00:00:00.000 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v22.11" build number 87 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3265 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.046 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.047 The recommended git tool is: git 00:00:00.047 using credential 00000000-0000-0000-0000-000000000002 00:00:00.049 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.070 Fetching changes from the remote Git repository 00:00:00.073 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.107 Using shallow fetch with depth 1 00:00:00.107 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.107 > git --version # timeout=10 00:00:00.153 > git --version # 'git version 2.39.2' 00:00:00.153 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.190 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.190 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.843 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.853 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.863 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:04.863 > git config core.sparsecheckout # timeout=10 00:00:04.873 > git read-tree -mu HEAD # timeout=10 00:00:04.888 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:04.903 Commit message: "inventory: add WCP3 to free inventory" 00:00:04.903 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:04.979 [Pipeline] Start of Pipeline 00:00:04.991 [Pipeline] library 00:00:04.992 Loading library shm_lib@master 00:00:04.993 Library shm_lib@master is cached. Copying from home. 00:00:05.010 [Pipeline] node 00:00:05.023 Running on WFP20 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:05.024 [Pipeline] { 00:00:05.033 [Pipeline] catchError 00:00:05.034 [Pipeline] { 00:00:05.044 [Pipeline] wrap 00:00:05.051 [Pipeline] { 00:00:05.056 [Pipeline] stage 00:00:05.057 [Pipeline] { (Prologue) 00:00:05.191 [Pipeline] sh 00:00:05.474 + logger -p user.info -t JENKINS-CI 00:00:05.488 [Pipeline] echo 00:00:05.489 Node: WFP20 00:00:05.494 [Pipeline] sh 00:00:05.785 [Pipeline] setCustomBuildProperty 00:00:05.796 [Pipeline] echo 00:00:05.797 Cleanup processes 00:00:05.800 [Pipeline] sh 00:00:06.080 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:06.080 3531243 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:06.091 [Pipeline] sh 00:00:06.369 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:06.369 ++ grep -v 'sudo pgrep' 00:00:06.369 ++ awk '{print $1}' 00:00:06.369 + sudo kill -9 00:00:06.369 + true 00:00:06.382 [Pipeline] cleanWs 00:00:06.391 [WS-CLEANUP] Deleting project workspace... 00:00:06.391 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.398 [WS-CLEANUP] done 00:00:06.402 [Pipeline] setCustomBuildProperty 00:00:06.413 [Pipeline] sh 00:00:06.694 + sudo git config --global --replace-all safe.directory '*' 00:00:06.777 [Pipeline] httpRequest 00:00:06.804 [Pipeline] echo 00:00:06.805 Sorcerer 10.211.164.101 is alive 00:00:06.811 [Pipeline] httpRequest 00:00:06.814 HttpMethod: GET 00:00:06.815 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.816 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.828 Response Code: HTTP/1.1 200 OK 00:00:06.828 Success: Status code 200 is in the accepted range: 200,404 00:00:06.829 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:09.459 [Pipeline] sh 00:00:09.744 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:09.763 [Pipeline] httpRequest 00:00:09.798 [Pipeline] echo 00:00:09.800 Sorcerer 10.211.164.101 is alive 00:00:09.810 [Pipeline] httpRequest 00:00:09.815 HttpMethod: GET 00:00:09.815 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:09.816 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:09.874 Response Code: HTTP/1.1 200 OK 00:00:09.875 Success: Status code 200 is in the accepted range: 200,404 00:00:09.875 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:01:22.104 [Pipeline] sh 00:01:22.387 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:01:24.937 [Pipeline] sh 00:01:25.222 + git -C spdk log --oneline -n5 00:01:25.222 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:01:25.222 330a4f94d nvme: check pthread_mutex_destroy() return value 00:01:25.222 7b72c3ced nvme: add nvme_ctrlr_lock 00:01:25.222 fc7a37019 nvme: always use nvme_robust_mutex_lock for ctrlr_lock 00:01:25.222 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout 00:01:25.242 [Pipeline] withCredentials 00:01:25.253 > git --version # timeout=10 00:01:25.267 > git --version # 'git version 2.39.2' 00:01:25.284 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:25.287 [Pipeline] { 00:01:25.297 [Pipeline] retry 00:01:25.300 [Pipeline] { 00:01:25.319 [Pipeline] sh 00:01:25.602 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:26.992 [Pipeline] } 00:01:27.017 [Pipeline] // retry 00:01:27.022 [Pipeline] } 00:01:27.042 [Pipeline] // withCredentials 00:01:27.051 [Pipeline] httpRequest 00:01:27.069 [Pipeline] echo 00:01:27.071 Sorcerer 10.211.164.101 is alive 00:01:27.077 [Pipeline] httpRequest 00:01:27.082 HttpMethod: GET 00:01:27.082 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:27.083 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:27.085 Response Code: HTTP/1.1 200 OK 00:01:27.085 Success: Status code 200 is in the accepted range: 200,404 00:01:27.086 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:31.110 [Pipeline] sh 00:01:31.394 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:32.787 [Pipeline] sh 00:01:33.071 + git -C dpdk log --oneline -n5 00:01:33.071 caf0f5d395 version: 22.11.4 00:01:33.071 
7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:33.071 dc9c799c7d vhost: fix missing spinlock unlock 00:01:33.071 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:33.071 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:33.082 [Pipeline] } 00:01:33.099 [Pipeline] // stage 00:01:33.109 [Pipeline] stage 00:01:33.112 [Pipeline] { (Prepare) 00:01:33.136 [Pipeline] writeFile 00:01:33.154 [Pipeline] sh 00:01:33.436 + logger -p user.info -t JENKINS-CI 00:01:33.449 [Pipeline] sh 00:01:33.731 + logger -p user.info -t JENKINS-CI 00:01:33.743 [Pipeline] sh 00:01:34.026 + cat autorun-spdk.conf 00:01:34.026 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:34.026 SPDK_RUN_UBSAN=1 00:01:34.026 SPDK_TEST_FUZZER=1 00:01:34.026 SPDK_TEST_FUZZER_SHORT=1 00:01:34.026 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:34.026 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:01:34.033 RUN_NIGHTLY=1 00:01:34.040 [Pipeline] readFile 00:01:34.068 [Pipeline] withEnv 00:01:34.070 [Pipeline] { 00:01:34.084 [Pipeline] sh 00:01:34.369 + set -ex 00:01:34.369 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:01:34.369 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:34.369 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:34.369 ++ SPDK_RUN_UBSAN=1 00:01:34.369 ++ SPDK_TEST_FUZZER=1 00:01:34.369 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:34.369 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:34.369 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:01:34.369 ++ RUN_NIGHTLY=1 00:01:34.369 + case $SPDK_TEST_NVMF_NICS in 00:01:34.369 + DRIVERS= 00:01:34.369 + [[ -n '' ]] 00:01:34.369 + exit 0 00:01:34.379 [Pipeline] } 00:01:34.398 [Pipeline] // withEnv 00:01:34.403 [Pipeline] } 00:01:34.419 [Pipeline] // stage 00:01:34.429 [Pipeline] catchError 00:01:34.431 [Pipeline] { 00:01:34.445 [Pipeline] timeout 00:01:34.446 Timeout set to expire in 30 min 00:01:34.447 [Pipeline] { 00:01:34.463 [Pipeline] stage 00:01:34.465 [Pipeline] { (Tests) 00:01:34.481 [Pipeline] sh 00:01:34.764 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:34.764 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:34.764 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:01:34.764 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:01:34.764 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:34.764 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:34.764 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:01:34.764 + [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:34.764 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:34.764 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:34.764 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:01:34.764 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:34.764 + source /etc/os-release 00:01:34.764 ++ NAME='Fedora Linux' 00:01:34.764 ++ VERSION='38 (Cloud Edition)' 00:01:34.764 ++ ID=fedora 00:01:34.764 ++ VERSION_ID=38 00:01:34.764 ++ VERSION_CODENAME= 00:01:34.764 ++ PLATFORM_ID=platform:f38 00:01:34.764 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:34.764 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:34.764 ++ LOGO=fedora-logo-icon 00:01:34.764 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:34.764 ++ HOME_URL=https://fedoraproject.org/ 00:01:34.764 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:34.764 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:34.764 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:34.764 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:34.764 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:34.764 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:34.764 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:34.764 ++ SUPPORT_END=2024-05-14 00:01:34.764 ++ VARIANT='Cloud Edition' 00:01:34.764 ++ VARIANT_ID=cloud 00:01:34.764 + uname -a 00:01:34.765 Linux spdk-wfp-20 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:34.765 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:01:37.300 Hugepages 00:01:37.300 node hugesize free / total 00:01:37.300 node0 1048576kB 0 / 0 00:01:37.300 node0 2048kB 0 / 0 00:01:37.300 node1 1048576kB 0 / 0 00:01:37.300 node1 2048kB 0 / 0 00:01:37.300 00:01:37.300 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:37.300 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:37.300 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:37.300 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:37.300 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:37.300 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:37.300 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:37.300 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:37.300 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:37.300 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:37.300 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:37.300 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:37.300 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:37.300 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:37.300 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:37.300 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:37.300 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:37.300 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:37.300 + rm -f /tmp/spdk-ld-path 00:01:37.300 + source autorun-spdk.conf 00:01:37.300 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.300 ++ SPDK_RUN_UBSAN=1 00:01:37.300 ++ SPDK_TEST_FUZZER=1 00:01:37.300 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:37.300 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:37.300 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:01:37.300 ++ RUN_NIGHTLY=1 00:01:37.300 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:37.300 + [[ -n '' ]] 00:01:37.300 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:37.300 + for M in /var/spdk/build-*-manifest.txt 00:01:37.300 + [[ -f 
/var/spdk/build-pkg-manifest.txt ]] 00:01:37.300 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:37.300 + for M in /var/spdk/build-*-manifest.txt 00:01:37.300 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:37.300 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:37.300 ++ uname 00:01:37.300 + [[ Linux == \L\i\n\u\x ]] 00:01:37.300 + sudo dmesg -T 00:01:37.300 + sudo dmesg --clear 00:01:37.300 + dmesg_pid=3532738 00:01:37.300 + [[ Fedora Linux == FreeBSD ]] 00:01:37.300 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:37.300 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:37.300 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:37.300 + [[ -x /usr/src/fio-static/fio ]] 00:01:37.300 + export FIO_BIN=/usr/src/fio-static/fio 00:01:37.300 + FIO_BIN=/usr/src/fio-static/fio 00:01:37.300 + sudo dmesg -Tw 00:01:37.300 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:37.300 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:37.300 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:37.300 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:37.300 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:37.300 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:37.300 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:37.300 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:37.300 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:37.300 Test configuration: 00:01:37.300 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.300 SPDK_RUN_UBSAN=1 00:01:37.300 SPDK_TEST_FUZZER=1 00:01:37.300 SPDK_TEST_FUZZER_SHORT=1 00:01:37.300 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:37.300 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:01:37.560 RUN_NIGHTLY=1 19:49:25 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:01:37.560 19:49:25 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:37.560 19:49:25 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:37.560 19:49:25 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:37.560 19:49:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:37.560 19:49:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:37.560 19:49:25 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:37.560 19:49:25 -- paths/export.sh@5 -- $ export PATH 00:01:37.560 19:49:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:37.560 19:49:25 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:01:37.560 19:49:25 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:37.560 19:49:25 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1720892965.XXXXXX 00:01:37.560 19:49:25 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1720892965.p6gzeP 00:01:37.560 19:49:25 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:37.560 19:49:25 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']' 00:01:37.560 19:49:25 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:01:37.560 19:49:25 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk' 00:01:37.560 19:49:25 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:37.560 19:49:25 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:37.560 19:49:25 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:37.560 19:49:25 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:37.560 19:49:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:37.560 19:49:25 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --with-vfio-user' 00:01:37.560 19:49:25 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:37.560 19:49:25 -- pm/common@17 -- $ local monitor 00:01:37.560 19:49:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:37.560 19:49:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:37.560 19:49:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:37.560 19:49:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:37.560 19:49:25 -- pm/common@25 -- $ sleep 1 00:01:37.560 19:49:25 -- pm/common@21 -- $ date +%s 00:01:37.560 19:49:25 -- pm/common@21 -- $ date +%s 00:01:37.560 19:49:25 -- pm/common@21 -- $ date +%s 00:01:37.560 19:49:25 -- pm/common@21 -- $ date +%s 00:01:37.560 19:49:25 -- pm/common@21 -- $ 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720892965 00:01:37.560 19:49:25 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720892965 00:01:37.560 19:49:25 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720892965 00:01:37.560 19:49:25 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720892965 00:01:37.560 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720892965_collect-vmstat.pm.log 00:01:37.560 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720892965_collect-cpu-load.pm.log 00:01:37.560 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720892965_collect-cpu-temp.pm.log 00:01:37.560 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720892965_collect-bmc-pm.bmc.pm.log 00:01:38.499 19:49:26 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:38.499 19:49:26 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:38.499 19:49:26 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:38.499 19:49:26 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:38.499 19:49:26 -- spdk/autobuild.sh@16 -- $ date -u 00:01:38.499 Sat Jul 13 05:49:26 PM UTC 2024 00:01:38.499 19:49:26 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:38.499 v24.05-13-g5fa2f5086 00:01:38.499 19:49:26 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:38.499 19:49:26 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:38.499 19:49:26 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:38.499 19:49:26 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:38.499 19:49:26 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:38.499 19:49:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.499 ************************************ 00:01:38.499 START TEST ubsan 00:01:38.499 ************************************ 00:01:38.499 19:49:26 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:38.499 using ubsan 00:01:38.499 00:01:38.499 real 0m0.001s 00:01:38.499 user 0m0.001s 00:01:38.499 sys 0m0.000s 00:01:38.499 19:49:26 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:38.499 19:49:26 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:38.499 ************************************ 00:01:38.499 END TEST ubsan 00:01:38.499 ************************************ 00:01:38.759 19:49:26 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:38.759 19:49:26 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:38.759 19:49:26 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:38.759 19:49:26 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:01:38.759 19:49:26 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:38.759 19:49:26 -- 
common/autotest_common.sh@10 -- $ set +x 00:01:38.759 ************************************ 00:01:38.759 START TEST build_native_dpdk 00:01:38.759 ************************************ 00:01:38.759 19:49:26 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:01:38.759 19:49:26 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:38.759 19:49:26 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:38.759 19:49:26 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:38.759 19:49:26 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:38.759 19:49:26 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:38.759 19:49:26 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:38.759 19:49:26 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:38.759 19:49:26 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:38.759 19:49:26 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:38.759 19:49:26 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:38.759 19:49:26 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:38.759 19:49:26 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:38.759 19:49:26 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:38.759 19:49:26 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:38.759 19:49:26 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:01:38.759 19:49:26 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:01:38.759 19:49:26 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk 00:01:38.759 19:49:26 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk ]] 00:01:38.759 19:49:26 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:38.759 19:49:26 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk log --oneline -n 5 00:01:38.759 caf0f5d395 version: 22.11.4 00:01:38.759 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:38.759 dc9c799c7d vhost: fix missing spinlock unlock 00:01:38.759 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:38.759 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:38.759 19:49:26 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:38.760 19:49:26 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:38.760 19:49:26 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:38.760 19:49:26 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:38.760 19:49:26 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:38.760 19:49:26 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:38.760 19:49:26 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:38.760 19:49:26 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:38.760 19:49:26 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:38.760 19:49:26 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:38.760 19:49:26 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:38.760 19:49:26 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:38.760 19:49:26 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:38.760 19:49:26 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:38.760 19:49:26 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk 00:01:38.760 19:49:26 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:38.760 19:49:26 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:38.760 19:49:26 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:38.760 19:49:26 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:38.760 19:49:26 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:38.760 19:49:26 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:38.760 19:49:26 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:38.760 19:49:26 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:38.760 19:49:26 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:38.760 19:49:26 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:38.760 19:49:26 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:38.760 19:49:26 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:38.760 19:49:26 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:38.760 19:49:26 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:38.760 19:49:26 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:38.760 
19:49:26 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:38.760 19:49:26 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:38.760 19:49:26 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:38.760 19:49:26 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:38.760 19:49:26 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:38.760 19:49:26 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:38.760 19:49:26 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:38.760 19:49:26 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:38.760 19:49:26 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:38.760 19:49:26 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:38.760 19:49:26 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:38.760 19:49:26 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:38.760 19:49:26 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:38.760 19:49:26 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:38.760 19:49:26 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:38.760 19:49:26 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:38.760 patching file config/rte_config.h 00:01:38.760 Hunk #1 succeeded at 60 (offset 1 line). 00:01:38.760 19:49:26 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:38.760 19:49:26 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:38.760 19:49:26 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:38.760 19:49:26 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:38.760 19:49:26 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:44.038 The Meson build system 00:01:44.038 Version: 1.3.1 00:01:44.038 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk 00:01:44.038 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp 00:01:44.038 Build type: native build 00:01:44.038 Program cat found: YES (/usr/bin/cat) 00:01:44.038 Project name: DPDK 00:01:44.038 Project version: 22.11.4 00:01:44.038 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:44.038 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:44.038 Host machine cpu family: x86_64 00:01:44.038 Host machine cpu: x86_64 00:01:44.038 Message: ## Building in Developer Mode ## 00:01:44.038 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:44.038 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:44.038 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:44.038 Program objdump found: YES (/usr/bin/objdump) 00:01:44.038 Program python3 found: YES (/usr/bin/python3) 00:01:44.038 Program cat found: YES (/usr/bin/cat) 00:01:44.038 config/meson.build:83: WARNING: The "machine" option 
is deprecated. Please use "cpu_instruction_set" instead. 00:01:44.038 Checking for size of "void *" : 8 00:01:44.038 Checking for size of "void *" : 8 (cached) 00:01:44.038 Library m found: YES 00:01:44.038 Library numa found: YES 00:01:44.038 Has header "numaif.h" : YES 00:01:44.038 Library fdt found: NO 00:01:44.038 Library execinfo found: NO 00:01:44.038 Has header "execinfo.h" : YES 00:01:44.038 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:44.038 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:44.038 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:44.038 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:44.038 Run-time dependency openssl found: YES 3.0.9 00:01:44.038 Run-time dependency libpcap found: YES 1.10.4 00:01:44.038 Has header "pcap.h" with dependency libpcap: YES 00:01:44.038 Compiler for C supports arguments -Wcast-qual: YES 00:01:44.038 Compiler for C supports arguments -Wdeprecated: YES 00:01:44.038 Compiler for C supports arguments -Wformat: YES 00:01:44.038 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:44.038 Compiler for C supports arguments -Wformat-security: NO 00:01:44.038 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:44.038 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:44.038 Compiler for C supports arguments -Wnested-externs: YES 00:01:44.038 Compiler for C supports arguments -Wold-style-definition: YES 00:01:44.038 Compiler for C supports arguments -Wpointer-arith: YES 00:01:44.038 Compiler for C supports arguments -Wsign-compare: YES 00:01:44.038 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:44.038 Compiler for C supports arguments -Wundef: YES 00:01:44.038 Compiler for C supports arguments -Wwrite-strings: YES 00:01:44.038 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:44.038 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:44.038 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:44.038 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:44.038 Compiler for C supports arguments -mavx512f: YES 00:01:44.038 Checking if "AVX512 checking" compiles: YES 00:01:44.038 Fetching value of define "__SSE4_2__" : 1 00:01:44.038 Fetching value of define "__AES__" : 1 00:01:44.038 Fetching value of define "__AVX__" : 1 00:01:44.038 Fetching value of define "__AVX2__" : 1 00:01:44.038 Fetching value of define "__AVX512BW__" : 1 00:01:44.038 Fetching value of define "__AVX512CD__" : 1 00:01:44.038 Fetching value of define "__AVX512DQ__" : 1 00:01:44.038 Fetching value of define "__AVX512F__" : 1 00:01:44.038 Fetching value of define "__AVX512VL__" : 1 00:01:44.038 Fetching value of define "__PCLMUL__" : 1 00:01:44.038 Fetching value of define "__RDRND__" : 1 00:01:44.038 Fetching value of define "__RDSEED__" : 1 00:01:44.038 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:44.038 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:44.038 Message: lib/kvargs: Defining dependency "kvargs" 00:01:44.038 Message: lib/telemetry: Defining dependency "telemetry" 00:01:44.038 Checking for function "getentropy" : YES 00:01:44.038 Message: lib/eal: Defining dependency "eal" 00:01:44.038 Message: lib/ring: Defining dependency "ring" 00:01:44.038 Message: lib/rcu: Defining dependency "rcu" 00:01:44.038 Message: lib/mempool: Defining dependency "mempool" 00:01:44.038 Message: lib/mbuf: Defining dependency "mbuf" 00:01:44.038 Fetching value of 
define "__PCLMUL__" : 1 (cached) 00:01:44.038 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:44.038 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:44.038 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:44.038 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:44.038 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:44.038 Compiler for C supports arguments -mpclmul: YES 00:01:44.038 Compiler for C supports arguments -maes: YES 00:01:44.038 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:44.038 Compiler for C supports arguments -mavx512bw: YES 00:01:44.038 Compiler for C supports arguments -mavx512dq: YES 00:01:44.038 Compiler for C supports arguments -mavx512vl: YES 00:01:44.038 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:44.038 Compiler for C supports arguments -mavx2: YES 00:01:44.038 Compiler for C supports arguments -mavx: YES 00:01:44.038 Message: lib/net: Defining dependency "net" 00:01:44.038 Message: lib/meter: Defining dependency "meter" 00:01:44.038 Message: lib/ethdev: Defining dependency "ethdev" 00:01:44.038 Message: lib/pci: Defining dependency "pci" 00:01:44.038 Message: lib/cmdline: Defining dependency "cmdline" 00:01:44.038 Message: lib/metrics: Defining dependency "metrics" 00:01:44.038 Message: lib/hash: Defining dependency "hash" 00:01:44.038 Message: lib/timer: Defining dependency "timer" 00:01:44.038 Fetching value of define "__AVX2__" : 1 (cached) 00:01:44.038 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:44.038 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:44.038 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:44.038 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:44.038 Message: lib/acl: Defining dependency "acl" 00:01:44.038 Message: lib/bbdev: Defining dependency "bbdev" 00:01:44.038 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:44.038 Run-time dependency libelf found: YES 0.190 00:01:44.038 Message: lib/bpf: Defining dependency "bpf" 00:01:44.038 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:44.038 Message: lib/compressdev: Defining dependency "compressdev" 00:01:44.038 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:44.038 Message: lib/distributor: Defining dependency "distributor" 00:01:44.038 Message: lib/efd: Defining dependency "efd" 00:01:44.038 Message: lib/eventdev: Defining dependency "eventdev" 00:01:44.038 Message: lib/gpudev: Defining dependency "gpudev" 00:01:44.038 Message: lib/gro: Defining dependency "gro" 00:01:44.038 Message: lib/gso: Defining dependency "gso" 00:01:44.038 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:44.038 Message: lib/jobstats: Defining dependency "jobstats" 00:01:44.038 Message: lib/latencystats: Defining dependency "latencystats" 00:01:44.038 Message: lib/lpm: Defining dependency "lpm" 00:01:44.038 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:44.038 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:44.038 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:44.038 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:44.038 Message: lib/member: Defining dependency "member" 00:01:44.038 Message: lib/pcapng: Defining dependency "pcapng" 00:01:44.038 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:44.038 Message: lib/power: Defining dependency "power" 00:01:44.038 Message: lib/rawdev: Defining dependency "rawdev" 00:01:44.038 Message: lib/regexdev: Defining 
dependency "regexdev" 00:01:44.038 Message: lib/dmadev: Defining dependency "dmadev" 00:01:44.038 Message: lib/rib: Defining dependency "rib" 00:01:44.038 Message: lib/reorder: Defining dependency "reorder" 00:01:44.038 Message: lib/sched: Defining dependency "sched" 00:01:44.038 Message: lib/security: Defining dependency "security" 00:01:44.038 Message: lib/stack: Defining dependency "stack" 00:01:44.038 Has header "linux/userfaultfd.h" : YES 00:01:44.038 Message: lib/vhost: Defining dependency "vhost" 00:01:44.038 Message: lib/ipsec: Defining dependency "ipsec" 00:01:44.038 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:44.038 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:44.038 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:44.038 Message: lib/fib: Defining dependency "fib" 00:01:44.038 Message: lib/port: Defining dependency "port" 00:01:44.038 Message: lib/pdump: Defining dependency "pdump" 00:01:44.038 Message: lib/table: Defining dependency "table" 00:01:44.038 Message: lib/pipeline: Defining dependency "pipeline" 00:01:44.038 Message: lib/graph: Defining dependency "graph" 00:01:44.038 Message: lib/node: Defining dependency "node" 00:01:44.038 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:44.038 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:44.038 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:44.038 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:44.038 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:44.039 Compiler for C supports arguments -Wno-unused-value: YES 00:01:44.039 Compiler for C supports arguments -Wno-format: YES 00:01:44.039 Compiler for C supports arguments -Wno-format-security: YES 00:01:44.039 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:44.299 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:44.299 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:44.299 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:44.299 Fetching value of define "__AVX2__" : 1 (cached) 00:01:44.299 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:44.299 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:44.299 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:44.299 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:44.299 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:44.299 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:44.299 Program doxygen found: YES (/usr/bin/doxygen) 00:01:44.299 Configuring doxy-api.conf using configuration 00:01:44.299 Program sphinx-build found: NO 00:01:44.299 Configuring rte_build_config.h using configuration 00:01:44.299 Message: 00:01:44.299 ================= 00:01:44.299 Applications Enabled 00:01:44.299 ================= 00:01:44.299 00:01:44.299 apps: 00:01:44.299 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:44.299 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:44.299 test-security-perf, 00:01:44.299 00:01:44.299 Message: 00:01:44.299 ================= 00:01:44.299 Libraries Enabled 00:01:44.299 ================= 00:01:44.299 00:01:44.299 libs: 00:01:44.299 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:44.299 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:44.299 bbdev, bitratestats, bpf, cfgfile, compressdev, 
cryptodev, distributor, efd, 00:01:44.299 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:44.299 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:44.299 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:44.299 table, pipeline, graph, node, 00:01:44.299 00:01:44.299 Message: 00:01:44.299 =============== 00:01:44.299 Drivers Enabled 00:01:44.299 =============== 00:01:44.299 00:01:44.299 common: 00:01:44.299 00:01:44.299 bus: 00:01:44.299 pci, vdev, 00:01:44.299 mempool: 00:01:44.299 ring, 00:01:44.299 dma: 00:01:44.299 00:01:44.299 net: 00:01:44.299 i40e, 00:01:44.299 raw: 00:01:44.299 00:01:44.299 crypto: 00:01:44.299 00:01:44.299 compress: 00:01:44.299 00:01:44.299 regex: 00:01:44.299 00:01:44.299 vdpa: 00:01:44.299 00:01:44.299 event: 00:01:44.299 00:01:44.299 baseband: 00:01:44.299 00:01:44.299 gpu: 00:01:44.299 00:01:44.299 00:01:44.299 Message: 00:01:44.299 ================= 00:01:44.299 Content Skipped 00:01:44.299 ================= 00:01:44.299 00:01:44.299 apps: 00:01:44.299 00:01:44.299 libs: 00:01:44.299 kni: explicitly disabled via build config (deprecated lib) 00:01:44.299 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:44.299 00:01:44.299 drivers: 00:01:44.299 common/cpt: not in enabled drivers build config 00:01:44.299 common/dpaax: not in enabled drivers build config 00:01:44.299 common/iavf: not in enabled drivers build config 00:01:44.299 common/idpf: not in enabled drivers build config 00:01:44.299 common/mvep: not in enabled drivers build config 00:01:44.299 common/octeontx: not in enabled drivers build config 00:01:44.299 bus/auxiliary: not in enabled drivers build config 00:01:44.299 bus/dpaa: not in enabled drivers build config 00:01:44.299 bus/fslmc: not in enabled drivers build config 00:01:44.299 bus/ifpga: not in enabled drivers build config 00:01:44.299 bus/vmbus: not in enabled drivers build config 00:01:44.299 common/cnxk: not in enabled drivers build config 00:01:44.299 common/mlx5: not in enabled drivers build config 00:01:44.299 common/qat: not in enabled drivers build config 00:01:44.299 common/sfc_efx: not in enabled drivers build config 00:01:44.299 mempool/bucket: not in enabled drivers build config 00:01:44.299 mempool/cnxk: not in enabled drivers build config 00:01:44.299 mempool/dpaa: not in enabled drivers build config 00:01:44.299 mempool/dpaa2: not in enabled drivers build config 00:01:44.299 mempool/octeontx: not in enabled drivers build config 00:01:44.299 mempool/stack: not in enabled drivers build config 00:01:44.299 dma/cnxk: not in enabled drivers build config 00:01:44.299 dma/dpaa: not in enabled drivers build config 00:01:44.299 dma/dpaa2: not in enabled drivers build config 00:01:44.299 dma/hisilicon: not in enabled drivers build config 00:01:44.299 dma/idxd: not in enabled drivers build config 00:01:44.299 dma/ioat: not in enabled drivers build config 00:01:44.299 dma/skeleton: not in enabled drivers build config 00:01:44.299 net/af_packet: not in enabled drivers build config 00:01:44.299 net/af_xdp: not in enabled drivers build config 00:01:44.299 net/ark: not in enabled drivers build config 00:01:44.299 net/atlantic: not in enabled drivers build config 00:01:44.299 net/avp: not in enabled drivers build config 00:01:44.299 net/axgbe: not in enabled drivers build config 00:01:44.299 net/bnx2x: not in enabled drivers build config 00:01:44.299 net/bnxt: not in enabled drivers build config 00:01:44.299 net/bonding: not in enabled drivers build config 
00:01:44.299 net/cnxk: not in enabled drivers build config 00:01:44.299 net/cxgbe: not in enabled drivers build config 00:01:44.299 net/dpaa: not in enabled drivers build config 00:01:44.299 net/dpaa2: not in enabled drivers build config 00:01:44.299 net/e1000: not in enabled drivers build config 00:01:44.299 net/ena: not in enabled drivers build config 00:01:44.299 net/enetc: not in enabled drivers build config 00:01:44.299 net/enetfec: not in enabled drivers build config 00:01:44.299 net/enic: not in enabled drivers build config 00:01:44.299 net/failsafe: not in enabled drivers build config 00:01:44.299 net/fm10k: not in enabled drivers build config 00:01:44.299 net/gve: not in enabled drivers build config 00:01:44.299 net/hinic: not in enabled drivers build config 00:01:44.299 net/hns3: not in enabled drivers build config 00:01:44.299 net/iavf: not in enabled drivers build config 00:01:44.299 net/ice: not in enabled drivers build config 00:01:44.299 net/idpf: not in enabled drivers build config 00:01:44.299 net/igc: not in enabled drivers build config 00:01:44.299 net/ionic: not in enabled drivers build config 00:01:44.299 net/ipn3ke: not in enabled drivers build config 00:01:44.299 net/ixgbe: not in enabled drivers build config 00:01:44.299 net/kni: not in enabled drivers build config 00:01:44.299 net/liquidio: not in enabled drivers build config 00:01:44.299 net/mana: not in enabled drivers build config 00:01:44.299 net/memif: not in enabled drivers build config 00:01:44.299 net/mlx4: not in enabled drivers build config 00:01:44.299 net/mlx5: not in enabled drivers build config 00:01:44.299 net/mvneta: not in enabled drivers build config 00:01:44.299 net/mvpp2: not in enabled drivers build config 00:01:44.299 net/netvsc: not in enabled drivers build config 00:01:44.299 net/nfb: not in enabled drivers build config 00:01:44.299 net/nfp: not in enabled drivers build config 00:01:44.299 net/ngbe: not in enabled drivers build config 00:01:44.299 net/null: not in enabled drivers build config 00:01:44.299 net/octeontx: not in enabled drivers build config 00:01:44.299 net/octeon_ep: not in enabled drivers build config 00:01:44.299 net/pcap: not in enabled drivers build config 00:01:44.299 net/pfe: not in enabled drivers build config 00:01:44.299 net/qede: not in enabled drivers build config 00:01:44.299 net/ring: not in enabled drivers build config 00:01:44.299 net/sfc: not in enabled drivers build config 00:01:44.299 net/softnic: not in enabled drivers build config 00:01:44.299 net/tap: not in enabled drivers build config 00:01:44.299 net/thunderx: not in enabled drivers build config 00:01:44.299 net/txgbe: not in enabled drivers build config 00:01:44.299 net/vdev_netvsc: not in enabled drivers build config 00:01:44.299 net/vhost: not in enabled drivers build config 00:01:44.299 net/virtio: not in enabled drivers build config 00:01:44.299 net/vmxnet3: not in enabled drivers build config 00:01:44.299 raw/cnxk_bphy: not in enabled drivers build config 00:01:44.299 raw/cnxk_gpio: not in enabled drivers build config 00:01:44.299 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:44.299 raw/ifpga: not in enabled drivers build config 00:01:44.299 raw/ntb: not in enabled drivers build config 00:01:44.299 raw/skeleton: not in enabled drivers build config 00:01:44.299 crypto/armv8: not in enabled drivers build config 00:01:44.299 crypto/bcmfs: not in enabled drivers build config 00:01:44.299 crypto/caam_jr: not in enabled drivers build config 00:01:44.299 crypto/ccp: not in enabled drivers 
build config 00:01:44.299 crypto/cnxk: not in enabled drivers build config 00:01:44.299 crypto/dpaa_sec: not in enabled drivers build config 00:01:44.299 crypto/dpaa2_sec: not in enabled drivers build config 00:01:44.299 crypto/ipsec_mb: not in enabled drivers build config 00:01:44.299 crypto/mlx5: not in enabled drivers build config 00:01:44.299 crypto/mvsam: not in enabled drivers build config 00:01:44.299 crypto/nitrox: not in enabled drivers build config 00:01:44.299 crypto/null: not in enabled drivers build config 00:01:44.299 crypto/octeontx: not in enabled drivers build config 00:01:44.299 crypto/openssl: not in enabled drivers build config 00:01:44.299 crypto/scheduler: not in enabled drivers build config 00:01:44.299 crypto/uadk: not in enabled drivers build config 00:01:44.299 crypto/virtio: not in enabled drivers build config 00:01:44.299 compress/isal: not in enabled drivers build config 00:01:44.299 compress/mlx5: not in enabled drivers build config 00:01:44.299 compress/octeontx: not in enabled drivers build config 00:01:44.299 compress/zlib: not in enabled drivers build config 00:01:44.299 regex/mlx5: not in enabled drivers build config 00:01:44.299 regex/cn9k: not in enabled drivers build config 00:01:44.299 vdpa/ifc: not in enabled drivers build config 00:01:44.299 vdpa/mlx5: not in enabled drivers build config 00:01:44.299 vdpa/sfc: not in enabled drivers build config 00:01:44.299 event/cnxk: not in enabled drivers build config 00:01:44.299 event/dlb2: not in enabled drivers build config 00:01:44.299 event/dpaa: not in enabled drivers build config 00:01:44.299 event/dpaa2: not in enabled drivers build config 00:01:44.299 event/dsw: not in enabled drivers build config 00:01:44.299 event/opdl: not in enabled drivers build config 00:01:44.299 event/skeleton: not in enabled drivers build config 00:01:44.299 event/sw: not in enabled drivers build config 00:01:44.299 event/octeontx: not in enabled drivers build config 00:01:44.299 baseband/acc: not in enabled drivers build config 00:01:44.299 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:44.299 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:44.299 baseband/la12xx: not in enabled drivers build config 00:01:44.299 baseband/null: not in enabled drivers build config 00:01:44.299 baseband/turbo_sw: not in enabled drivers build config 00:01:44.299 gpu/cuda: not in enabled drivers build config 00:01:44.299 00:01:44.299 00:01:44.299 Build targets in project: 311 00:01:44.299 00:01:44.299 DPDK 22.11.4 00:01:44.299 00:01:44.299 User defined options 00:01:44.299 libdir : lib 00:01:44.299 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:01:44.299 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:44.299 c_link_args : 00:01:44.299 enable_docs : false 00:01:44.299 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:44.299 enable_kmods : false 00:01:44.299 machine : native 00:01:44.299 tests : false 00:01:44.299 00:01:44.299 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:44.300 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:01:44.300 19:49:31 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp -j112 00:01:44.300 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp' 00:01:44.565 [1/740] Generating lib/rte_kvargs_def with a custom command 00:01:44.565 [2/740] Generating lib/rte_kvargs_mingw with a custom command 00:01:44.565 [3/740] Generating lib/rte_telemetry_def with a custom command 00:01:44.565 [4/740] Generating lib/rte_telemetry_mingw with a custom command 00:01:44.565 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:44.565 [6/740] Generating lib/rte_eal_mingw with a custom command 00:01:44.565 [7/740] Generating lib/rte_ring_mingw with a custom command 00:01:44.565 [8/740] Generating lib/rte_mempool_def with a custom command 00:01:44.565 [9/740] Generating lib/rte_ring_def with a custom command 00:01:44.566 [10/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:44.566 [11/740] Generating lib/rte_eal_def with a custom command 00:01:44.566 [12/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:44.566 [13/740] Generating lib/rte_mbuf_def with a custom command 00:01:44.566 [14/740] Generating lib/rte_rcu_def with a custom command 00:01:44.566 [15/740] Generating lib/rte_rcu_mingw with a custom command 00:01:44.566 [16/740] Generating lib/rte_mempool_mingw with a custom command 00:01:44.566 [17/740] Generating lib/rte_net_mingw with a custom command 00:01:44.566 [18/740] Generating lib/rte_mbuf_mingw with a custom command 00:01:44.566 [19/740] Generating lib/rte_meter_def with a custom command 00:01:44.566 [20/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:44.566 [21/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:44.566 [22/740] Generating lib/rte_net_def with a custom command 00:01:44.566 [23/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:44.566 [24/740] Generating lib/rte_meter_mingw with a custom command 00:01:44.566 [25/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:44.566 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:44.566 [27/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:44.566 [28/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:44.566 [29/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:44.566 [30/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:44.566 [31/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:44.566 [32/740] Generating lib/rte_ethdev_mingw with a custom command 00:01:44.566 [33/740] Generating lib/rte_ethdev_def with a custom command 00:01:44.566 [34/740] Generating lib/rte_pci_def with a custom command 00:01:44.566 [35/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:44.566 [36/740] Generating lib/rte_pci_mingw with a custom command 00:01:44.566 [37/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:44.826 [38/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:44.826 [39/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:44.826 [40/740] Linking static target lib/librte_kvargs.a 00:01:44.826 [41/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 
00:01:44.826 [42/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:44.826 [43/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:44.826 [44/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:44.826 [45/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:44.826 [46/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:44.826 [47/740] Generating lib/rte_cmdline_mingw with a custom command 00:01:44.826 [48/740] Generating lib/rte_cmdline_def with a custom command 00:01:44.826 [49/740] Generating lib/rte_metrics_mingw with a custom command 00:01:44.826 [50/740] Generating lib/rte_metrics_def with a custom command 00:01:44.826 [51/740] Generating lib/rte_hash_def with a custom command 00:01:44.826 [52/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:44.826 [53/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:44.826 [54/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:44.826 [55/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:44.826 [56/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:44.826 [57/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:44.826 [58/740] Generating lib/rte_hash_mingw with a custom command 00:01:44.826 [59/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:44.826 [60/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:44.826 [61/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:44.826 [62/740] Generating lib/rte_timer_def with a custom command 00:01:44.826 [63/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:44.826 [64/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:44.826 [65/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:44.826 [66/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:44.826 [67/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:44.826 [68/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:44.826 [69/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:44.826 [70/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:44.826 [71/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:44.826 [72/740] Generating lib/rte_timer_mingw with a custom command 00:01:44.826 [73/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:44.826 [74/740] Generating lib/rte_bbdev_def with a custom command 00:01:44.826 [75/740] Generating lib/rte_acl_def with a custom command 00:01:44.826 [76/740] Generating lib/rte_bbdev_mingw with a custom command 00:01:44.826 [77/740] Generating lib/rte_acl_mingw with a custom command 00:01:44.826 [78/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:44.826 [79/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:44.826 [80/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:44.826 [81/740] Generating lib/rte_bitratestats_mingw with a custom command 00:01:44.826 [82/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:44.826 [83/740] Generating lib/rte_bitratestats_def with a custom command 
00:01:44.826 [84/740] Generating lib/rte_bpf_def with a custom command 00:01:44.826 [85/740] Linking static target lib/librte_meter.a 00:01:44.826 [86/740] Generating lib/rte_bpf_mingw with a custom command 00:01:44.826 [87/740] Linking static target lib/librte_pci.a 00:01:44.826 [88/740] Generating lib/rte_cfgfile_def with a custom command 00:01:44.826 [89/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:44.826 [90/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:44.826 [91/740] Generating lib/rte_cfgfile_mingw with a custom command 00:01:44.826 [92/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:44.826 [93/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:44.826 [94/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:44.826 [95/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:44.826 [96/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:44.826 [97/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:44.826 [98/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:44.826 [99/740] Generating lib/rte_compressdev_def with a custom command 00:01:44.826 [100/740] Linking static target lib/librte_ring.a 00:01:44.826 [101/740] Generating lib/rte_compressdev_mingw with a custom command 00:01:44.826 [102/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:44.826 [103/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:44.826 [104/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:44.826 [105/740] Generating lib/rte_cryptodev_def with a custom command 00:01:44.826 [106/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:44.826 [107/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:44.826 [108/740] Generating lib/rte_cryptodev_mingw with a custom command 00:01:44.826 [109/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:44.826 [110/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:44.826 [111/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:44.826 [112/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:44.826 [113/740] Generating lib/rte_distributor_def with a custom command 00:01:44.826 [114/740] Generating lib/rte_distributor_mingw with a custom command 00:01:44.826 [115/740] Generating lib/rte_efd_mingw with a custom command 00:01:44.826 [116/740] Generating lib/rte_efd_def with a custom command 00:01:44.826 [117/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:44.826 [118/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:44.826 [119/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:45.094 [120/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:45.094 [121/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:45.094 [122/740] Generating lib/rte_eventdev_mingw with a custom command 00:01:45.094 [123/740] Generating lib/rte_eventdev_def with a custom command 00:01:45.094 [124/740] Generating lib/rte_gpudev_mingw with a custom command 00:01:45.094 [125/740] Generating lib/rte_gpudev_def with a custom command 00:01:45.094 [126/740] Compiling C object 
lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:45.094 [127/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:45.094 [128/740] Generating lib/rte_gro_mingw with a custom command 00:01:45.094 [129/740] Generating lib/rte_gro_def with a custom command 00:01:45.094 [130/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:45.094 [131/740] Generating lib/rte_gso_mingw with a custom command 00:01:45.094 [132/740] Generating lib/rte_gso_def with a custom command 00:01:45.094 [133/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:45.094 [134/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:45.094 [135/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.094 [136/740] Generating lib/rte_ip_frag_def with a custom command 00:01:45.094 [137/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:45.094 [138/740] Generating lib/rte_ip_frag_mingw with a custom command 00:01:45.094 [139/740] Linking target lib/librte_kvargs.so.23.0 00:01:45.094 [140/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.094 [141/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.094 [142/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:45.094 [143/740] Generating lib/rte_jobstats_def with a custom command 00:01:45.094 [144/740] Generating lib/rte_jobstats_mingw with a custom command 00:01:45.094 [145/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:45.356 [146/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:45.356 [147/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:45.356 [148/740] Linking static target lib/librte_cfgfile.a 00:01:45.356 [149/740] Generating lib/rte_latencystats_def with a custom command 00:01:45.356 [150/740] Generating lib/rte_latencystats_mingw with a custom command 00:01:45.356 [151/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:45.356 [152/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:45.356 [153/740] Generating lib/rte_lpm_def with a custom command 00:01:45.356 [154/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:45.356 [155/740] Generating lib/rte_lpm_mingw with a custom command 00:01:45.356 [156/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:45.356 [157/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:45.356 [158/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:45.356 [159/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:45.356 [160/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.356 [161/740] Generating lib/rte_member_def with a custom command 00:01:45.356 [162/740] Generating lib/rte_member_mingw with a custom command 00:01:45.356 [163/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:45.356 [164/740] Generating lib/rte_pcapng_def with a custom command 00:01:45.356 [165/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:45.356 [166/740] Generating lib/rte_pcapng_mingw with a custom command 00:01:45.356 [167/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:45.356 [168/740] Compiling C 
object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:45.356 [169/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:45.356 [170/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:45.356 [171/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:45.356 [172/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:45.356 [173/740] Linking static target lib/librte_jobstats.a 00:01:45.356 [174/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:45.356 [175/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:45.356 [176/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:45.356 [177/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:45.356 [178/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:45.356 [179/740] Linking static target lib/librte_telemetry.a 00:01:45.356 [180/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:45.356 [181/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:45.356 [182/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:45.356 [183/740] Linking static target lib/librte_cmdline.a 00:01:45.356 [184/740] Generating lib/rte_power_mingw with a custom command 00:01:45.356 [185/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:45.356 [186/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:45.356 [187/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:45.356 [188/740] Generating lib/rte_power_def with a custom command 00:01:45.356 [189/740] Linking static target lib/librte_metrics.a 00:01:45.356 [190/740] Linking static target lib/librte_timer.a 00:01:45.356 [191/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:45.356 [192/740] Generating lib/rte_rawdev_def with a custom command 00:01:45.356 [193/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:45.356 [194/740] Generating lib/rte_rawdev_mingw with a custom command 00:01:45.356 [195/740] Generating lib/rte_regexdev_def with a custom command 00:01:45.356 [196/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:45.356 [197/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:45.356 [198/740] Generating lib/rte_regexdev_mingw with a custom command 00:01:45.356 [199/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:45.356 [200/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:45.356 [201/740] Generating lib/rte_dmadev_mingw with a custom command 00:01:45.356 [202/740] Generating lib/rte_dmadev_def with a custom command 00:01:45.356 [203/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:45.356 [204/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:45.356 [205/740] Generating lib/rte_rib_def with a custom command 00:01:45.356 [206/740] Generating lib/rte_rib_mingw with a custom command 00:01:45.356 [207/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:45.618 [208/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:45.618 [209/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:45.618 [210/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:45.618 [211/740] 
Generating lib/rte_reorder_mingw with a custom command 00:01:45.618 [212/740] Generating lib/rte_reorder_def with a custom command 00:01:45.618 [213/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:45.618 [214/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:45.618 [215/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:45.618 [216/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:45.618 [217/740] Generating lib/rte_sched_def with a custom command 00:01:45.618 [218/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:45.618 [219/740] Linking static target lib/librte_net.a 00:01:45.618 [220/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:45.618 [221/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:45.618 [222/740] Generating lib/rte_sched_mingw with a custom command 00:01:45.618 [223/740] Generating lib/rte_security_def with a custom command 00:01:45.618 [224/740] Linking static target lib/librte_bitratestats.a 00:01:45.618 [225/740] Generating lib/rte_security_mingw with a custom command 00:01:45.618 [226/740] Generating lib/rte_stack_def with a custom command 00:01:45.618 [227/740] Generating lib/rte_stack_mingw with a custom command 00:01:45.618 [228/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:45.618 [229/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:45.618 [230/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:45.618 [231/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:45.618 [232/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:45.618 [233/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:45.618 [234/740] Generating lib/rte_vhost_def with a custom command 00:01:45.618 [235/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:45.618 [236/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:45.618 [237/740] Generating lib/rte_vhost_mingw with a custom command 00:01:45.618 [238/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:45.618 [239/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:45.618 [240/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:45.618 [241/740] Generating lib/rte_ipsec_mingw with a custom command 00:01:45.618 [242/740] Generating lib/rte_ipsec_def with a custom command 00:01:45.618 [243/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:45.618 [244/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:45.618 [245/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:45.618 [246/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:45.618 [247/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:45.618 [248/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:45.618 [249/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:45.618 [250/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:45.618 [251/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:45.618 [252/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:45.618 [253/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:45.618 [254/740] Generating 
lib/rte_fib_mingw with a custom command 00:01:45.618 [255/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:45.618 [256/740] Generating lib/rte_fib_def with a custom command 00:01:45.618 [257/740] Linking static target lib/librte_stack.a 00:01:45.618 [258/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:45.618 [259/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:45.618 [260/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:45.618 [261/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:45.618 [262/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:45.618 [263/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:45.618 [264/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:45.618 [265/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:45.618 [266/740] Generating lib/rte_port_def with a custom command 00:01:45.618 [267/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:45.618 [268/740] Linking static target lib/librte_compressdev.a 00:01:45.902 [269/740] Generating lib/rte_port_mingw with a custom command 00:01:45.902 [270/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:45.902 [271/740] Generating lib/rte_pdump_def with a custom command 00:01:45.902 [272/740] Generating lib/rte_pdump_mingw with a custom command 00:01:45.902 [273/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:45.902 [274/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:45.902 [275/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.902 [276/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:45.902 [277/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:45.902 [278/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.902 [279/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:45.902 [280/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:45.902 [281/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:45.902 [282/740] Linking static target lib/librte_rcu.a 00:01:45.902 [283/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:45.902 [284/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:45.902 [285/740] Linking static target lib/librte_mempool.a 00:01:45.902 [286/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:45.902 [287/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.902 [288/740] Linking static target lib/librte_rawdev.a 00:01:45.902 [289/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:45.902 [290/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:45.902 [291/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:45.902 [292/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:45.902 [293/740] Generating lib/rte_table_def with a custom command 00:01:45.902 [294/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.902 [295/740] Linking static target lib/librte_gro.a 00:01:45.902 [296/740] Linking static target 
lib/librte_bbdev.a 00:01:45.902 [297/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:45.902 [298/740] Linking static target lib/librte_dmadev.a 00:01:45.902 [299/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.902 [300/740] Generating lib/rte_table_mingw with a custom command 00:01:45.902 [301/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:45.902 [302/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:45.902 [303/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:45.902 [304/740] Linking static target lib/librte_gpudev.a 00:01:45.902 [305/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:45.902 [306/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:45.902 [307/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.902 [308/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:46.164 [309/740] Linking target lib/librte_telemetry.so.23.0 00:01:46.164 [310/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.164 [311/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.164 [312/740] Generating lib/rte_pipeline_def with a custom command 00:01:46.164 [313/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:46.164 [314/740] Generating lib/rte_pipeline_mingw with a custom command 00:01:46.164 [315/740] Linking static target lib/librte_gso.a 00:01:46.164 [316/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:46.164 [317/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:46.164 [318/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:46.164 [319/740] Linking static target lib/librte_latencystats.a 00:01:46.164 [320/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:46.164 [321/740] Generating lib/rte_graph_def with a custom command 00:01:46.164 [322/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:46.164 [323/740] Generating lib/rte_graph_mingw with a custom command 00:01:46.164 [324/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:46.164 [325/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:46.164 [326/740] Linking static target lib/librte_ip_frag.a 00:01:46.164 [327/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:46.164 [328/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:46.164 [329/740] Linking static target lib/librte_distributor.a 00:01:46.164 [330/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:46.164 [331/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:46.164 [332/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:46.164 [333/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:46.164 [334/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:46.164 [335/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:46.164 [336/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:46.164 [337/740] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:46.164 [338/740] Linking static target lib/librte_regexdev.a 00:01:46.164 [339/740] Generating lib/rte_node_def with a custom command 00:01:46.164 [340/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:46.164 [341/740] Generating lib/rte_node_mingw with a custom command 00:01:46.164 [342/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:46.164 [343/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.164 [344/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:46.164 [345/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:46.426 [346/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.426 [347/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:46.426 [348/740] Generating drivers/rte_bus_pci_def with a custom command 00:01:46.426 [349/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:46.426 [350/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:46.426 [351/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.426 [352/740] Linking static target lib/librte_eal.a 00:01:46.426 [353/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:46.426 [354/740] Generating drivers/rte_bus_vdev_def with a custom command 00:01:46.426 [355/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:46.426 [356/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:46.426 [357/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:46.426 [358/740] Linking static target lib/librte_reorder.a 00:01:46.426 [359/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:46.426 [360/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:46.426 [361/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:46.426 [362/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:46.426 [363/740] Linking static target lib/librte_power.a 00:01:46.426 [364/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:46.426 [365/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.426 [366/740] Generating drivers/rte_mempool_ring_def with a custom command 00:01:46.426 [367/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:46.426 [368/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:46.426 [369/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:46.426 [370/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:46.426 [371/740] Linking static target lib/librte_pcapng.a 00:01:46.426 [372/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:46.426 [373/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:46.426 [374/740] Linking static target lib/librte_security.a 00:01:46.426 [375/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:46.426 [376/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:46.426 [377/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:46.426 [378/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 
00:01:46.427 [379/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:46.427 [380/740] Linking static target lib/librte_mbuf.a 00:01:46.427 [381/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:46.427 [382/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:46.427 [383/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:46.427 [384/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:46.688 [385/740] Linking static target lib/librte_bpf.a 00:01:46.688 [386/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.688 [387/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:46.688 [388/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.688 [389/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.688 [390/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:46.688 [391/740] Generating drivers/rte_net_i40e_def with a custom command 00:01:46.688 [392/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:46.688 [393/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:46.688 [394/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:46.688 [395/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:46.688 [396/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:46.688 [397/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:46.688 [398/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:46.688 [399/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:46.688 [400/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:46.688 [401/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:46.688 [402/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:46.688 [403/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:46.688 [404/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.688 [405/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:46.688 [406/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:46.688 [407/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:46.688 [408/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:46.688 [409/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:46.688 [410/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.688 [411/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:46.688 [412/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:46.688 [413/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:46.688 [414/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:46.688 [415/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:46.688 [416/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:46.688 [417/740] Linking static target lib/librte_rib.a 00:01:46.688 [418/740] Linking static target lib/librte_lpm.a 00:01:46.688 [419/740] 
Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:46.688 [420/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.688 [421/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:46.949 [422/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:46.949 [423/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:46.949 [424/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:46.949 [425/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:46.949 [426/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:46.949 [427/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:46.949 [428/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:46.949 [429/740] Linking static target lib/librte_graph.a 00:01:46.949 [430/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:46.949 [431/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:46.949 [432/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.949 [433/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:46.949 [434/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:46.949 [435/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:46.949 [436/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:46.949 [437/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:46.949 [438/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.949 [439/740] Linking static target lib/librte_efd.a 00:01:46.949 [440/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:46.949 [441/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:46.949 [442/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:46.949 [443/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:46.949 [444/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:46.949 [445/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:46.949 [446/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.949 [447/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:46.949 [448/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.210 [449/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:47.210 [450/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:47.210 [451/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:47.210 [452/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:47.210 [453/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:47.210 [454/740] Linking static target drivers/librte_bus_vdev.a 00:01:47.210 [455/740] Linking static target lib/librte_fib.a 00:01:47.210 [456/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:47.210 [457/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:47.210 [458/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:47.210 
[459/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:47.210 [460/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.210 [461/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.210 [462/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:47.475 [463/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:47.475 [464/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.475 [465/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.475 [466/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.475 [467/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:47.475 [468/740] Linking static target lib/librte_pdump.a 00:01:47.475 [469/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:47.475 [470/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:47.475 [471/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:47.475 [472/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:47.475 [473/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:47.475 [474/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:47.475 [475/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.475 [476/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:47.475 [477/740] Linking static target drivers/librte_bus_pci.a 00:01:47.475 [478/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:47.475 [479/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.475 [480/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:47.475 [481/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:47.475 [482/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.475 [483/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.475 [484/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:47.475 [485/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:47.475 [486/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:47.475 [487/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:47.735 [488/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:47.735 [489/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:47.735 [490/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:47.735 [491/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:47.735 [492/740] Linking static target lib/librte_table.a 00:01:47.735 [493/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:47.735 [494/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.735 [495/740] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:47.735 [496/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:47.735 [497/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:47.735 [498/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:47.735 [499/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:47.735 [500/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:47.735 [501/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:47.735 [502/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:47.735 [503/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:47.735 [504/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:47.735 [505/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:47.735 [506/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:47.735 [507/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:47.735 [508/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.735 [509/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:47.992 [510/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:47.992 [511/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:47.992 [512/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:47.992 [513/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:47.992 [514/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:47.992 [515/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.992 [516/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:47.992 [517/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:47.992 [518/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:47.992 [519/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:47.992 [520/740] Linking static target lib/librte_cryptodev.a 00:01:47.992 [521/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:47.992 [522/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:47.992 [523/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.992 [524/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:47.992 [525/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:47.992 [526/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:47.992 [527/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:47.992 [528/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:47.992 [529/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:47.992 [530/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:47.992 [531/740] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:47.992 [532/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:47.992 [533/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:47.992 [534/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:47.992 [535/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:47.992 [536/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:47.992 [537/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:47.992 [538/740] Linking static target lib/librte_sched.a 00:01:47.992 [539/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:48.274 [540/740] Linking static target lib/librte_node.a 00:01:48.274 [541/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:48.274 [542/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:48.274 [543/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.274 [544/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:48.274 [545/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:48.274 [546/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:48.274 [547/740] Linking static target drivers/librte_mempool_ring.a 00:01:48.274 [548/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:48.274 [549/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:48.274 [550/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:48.274 [551/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:48.274 [552/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:48.274 [553/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:48.274 [554/740] Linking static target lib/librte_ipsec.a 00:01:48.274 [555/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:48.274 [556/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:48.274 [557/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:48.274 [558/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:48.274 [559/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:48.274 [560/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:48.274 [561/740] Linking static target lib/librte_ethdev.a 00:01:48.274 [562/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:48.274 [563/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:48.274 [564/740] Linking static target lib/librte_member.a 00:01:48.274 [565/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:48.274 [566/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:48.274 [567/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:48.274 [568/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:48.274 [569/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:48.274 [570/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:48.274 [571/740] 
Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.274 [572/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:48.274 [573/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:48.541 [574/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:48.541 [575/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:48.541 [576/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:48.541 [577/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:48.541 [578/740] Linking static target lib/librte_port.a 00:01:48.541 [579/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:48.541 [580/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.541 [581/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:48.541 [582/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:48.541 [583/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:48.541 [584/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:48.541 [585/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:48.541 [586/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:48.541 [587/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:48.541 [588/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:48.541 [589/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:48.541 [590/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:48.541 [591/740] Linking static target lib/librte_hash.a 00:01:48.541 [592/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:48.541 [593/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.541 [594/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:48.541 [595/740] Linking static target lib/librte_eventdev.a 00:01:48.541 [596/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:48.810 [597/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.810 [598/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:48.810 [599/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:48.810 [600/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:01:48.810 [601/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.810 [602/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:48.810 [603/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:48.810 [604/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:48.810 [605/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:48.810 [606/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:49.068 [607/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:49.068 [608/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 
00:01:49.068 [609/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:49.068 [610/740] Linking static target lib/librte_acl.a 00:01:49.068 [611/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:49.068 [612/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.327 [613/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:49.327 [614/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:49.585 [615/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.585 [616/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:49.585 [617/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:49.585 [618/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.151 [619/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:50.151 [620/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:50.151 [621/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:50.717 [622/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:50.717 [623/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:50.976 [624/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:50.976 [625/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:50.976 [626/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:51.234 [627/740] Linking static target drivers/librte_net_i40e.a 00:01:51.234 [628/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.493 [629/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:51.752 [630/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:51.752 [631/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.011 [632/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:52.270 [633/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.477 [634/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.862 [635/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:57.862 [636/740] Linking static target lib/librte_vhost.a 00:01:58.122 [637/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:01:58.382 [638/740] Linking static target lib/librte_pipeline.a 00:01:58.641 [639/740] Linking target app/dpdk-test-flow-perf 00:01:58.641 [640/740] Linking target app/dpdk-test-cmdline 00:01:58.641 [641/740] Linking target app/dpdk-test-sad 00:01:58.641 [642/740] Linking target app/dpdk-test-eventdev 00:01:58.641 [643/740] Linking target app/dpdk-test-fib 00:01:58.641 [644/740] Linking target app/dpdk-test-regex 00:01:58.641 [645/740] Linking target app/dpdk-test-gpudev 00:01:58.641 [646/740] Linking target app/dpdk-dumpcap 00:01:58.641 [647/740] Linking target app/dpdk-pdump 00:01:58.641 [648/740] Linking target app/dpdk-test-compress-perf 00:01:58.641 [649/740] Linking target app/dpdk-test-acl 00:01:58.641 [650/740] Linking target app/dpdk-proc-info 00:01:58.641 [651/740] Linking target app/dpdk-test-pipeline 00:01:58.641 
[652/740] Linking target app/dpdk-test-security-perf 00:01:58.641 [653/740] Linking target app/dpdk-test-crypto-perf 00:01:58.641 [654/740] Linking target app/dpdk-test-bbdev 00:01:58.641 [655/740] Linking target app/dpdk-testpmd 00:01:59.579 [656/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.838 [657/740] Linking target lib/librte_eal.so.23.0 00:01:59.838 [658/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:59.838 [659/740] Linking target lib/librte_pci.so.23.0 00:01:59.838 [660/740] Linking target lib/librte_jobstats.so.23.0 00:02:00.096 [661/740] Linking target lib/librte_ring.so.23.0 00:02:00.097 [662/740] Linking target lib/librte_stack.so.23.0 00:02:00.097 [663/740] Linking target drivers/librte_bus_vdev.so.23.0 00:02:00.097 [664/740] Linking target lib/librte_cfgfile.so.23.0 00:02:00.097 [665/740] Linking target lib/librte_meter.so.23.0 00:02:00.097 [666/740] Linking target lib/librte_timer.so.23.0 00:02:00.097 [667/740] Linking target lib/librte_rawdev.so.23.0 00:02:00.097 [668/740] Linking target lib/librte_graph.so.23.0 00:02:00.097 [669/740] Linking target lib/librte_dmadev.so.23.0 00:02:00.097 [670/740] Linking target lib/librte_acl.so.23.0 00:02:00.097 [671/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.097 [672/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:00.097 [673/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:00.097 [674/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:00.097 [675/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:00.097 [676/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:00.097 [677/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:00.097 [678/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:00.097 [679/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:00.097 [680/740] Linking target drivers/librte_bus_pci.so.23.0 00:02:00.097 [681/740] Linking target lib/librte_mempool.so.23.0 00:02:00.097 [682/740] Linking target lib/librte_rcu.so.23.0 00:02:00.356 [683/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:00.356 [684/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:00.356 [685/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:00.356 [686/740] Linking target lib/librte_mbuf.so.23.0 00:02:00.356 [687/740] Linking target drivers/librte_mempool_ring.so.23.0 00:02:00.356 [688/740] Linking target lib/librte_rib.so.23.0 00:02:00.356 [689/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:00.356 [690/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:00.614 [691/740] Linking target lib/librte_gpudev.so.23.0 00:02:00.614 [692/740] Linking target lib/librte_cryptodev.so.23.0 00:02:00.614 [693/740] Linking target lib/librte_bbdev.so.23.0 00:02:00.614 [694/740] Linking target lib/librte_reorder.so.23.0 00:02:00.614 [695/740] Linking target lib/librte_regexdev.so.23.0 00:02:00.614 [696/740] Linking target lib/librte_sched.so.23.0 00:02:00.614 [697/740] Linking target lib/librte_compressdev.so.23.0 
00:02:00.614 [698/740] Linking target lib/librte_distributor.so.23.0 00:02:00.614 [699/740] Linking target lib/librte_net.so.23.0 00:02:00.614 [700/740] Linking target lib/librte_fib.so.23.0 00:02:00.614 [701/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:00.614 [702/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:00.614 [703/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:00.614 [704/740] Linking target lib/librte_hash.so.23.0 00:02:00.614 [705/740] Linking target lib/librte_security.so.23.0 00:02:00.614 [706/740] Linking target lib/librte_cmdline.so.23.0 00:02:00.873 [707/740] Linking target lib/librte_ethdev.so.23.0 00:02:00.873 [708/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:00.873 [709/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:00.873 [710/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:00.873 [711/740] Linking target lib/librte_efd.so.23.0 00:02:00.873 [712/740] Linking target lib/librte_lpm.so.23.0 00:02:00.873 [713/740] Linking target lib/librte_member.so.23.0 00:02:00.873 [714/740] Linking target lib/librte_metrics.so.23.0 00:02:00.873 [715/740] Linking target lib/librte_ipsec.so.23.0 00:02:00.873 [716/740] Linking target lib/librte_gro.so.23.0 00:02:00.873 [717/740] Linking target lib/librte_power.so.23.0 00:02:00.873 [718/740] Linking target lib/librte_gso.so.23.0 00:02:00.873 [719/740] Linking target lib/librte_pcapng.so.23.0 00:02:00.873 [720/740] Linking target lib/librte_ip_frag.so.23.0 00:02:00.873 [721/740] Linking target lib/librte_bpf.so.23.0 00:02:00.873 [722/740] Linking target lib/librte_eventdev.so.23.0 00:02:00.873 [723/740] Linking target lib/librte_vhost.so.23.0 00:02:00.873 [724/740] Linking target drivers/librte_net_i40e.so.23.0 00:02:01.133 [725/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:01.133 [726/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:01.133 [727/740] Linking target lib/librte_node.so.23.0 00:02:01.133 [728/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:01.133 [729/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:01.133 [730/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:01.133 [731/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:01.133 [732/740] Linking target lib/librte_bitratestats.so.23.0 00:02:01.133 [733/740] Linking target lib/librte_latencystats.so.23.0 00:02:01.133 [734/740] Linking target lib/librte_pdump.so.23.0 00:02:01.133 [735/740] Linking target lib/librte_port.so.23.0 00:02:01.393 [736/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:01.393 [737/740] Linking target lib/librte_table.so.23.0 00:02:01.393 [738/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:03.303 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.303 [740/740] Linking target lib/librte_pipeline.so.23.0 00:02:03.303 19:49:50 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp -j112 install 00:02:03.303 ninja: Entering directory 
`/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp' 00:02:03.303 [0/1] Installing files. 00:02:03.564 Installing subdir /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples 00:02:03.564 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.564 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.564 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.564 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.564 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 
Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:03.565 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:03.565 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.565 
Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:03.565 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.565 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:03.565 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:03.566 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:03.566 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-graph/main.c to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/skeleton/Makefile to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.566 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.567 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:03.567 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 
00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:03.567 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:03.568 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:03.568 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt 
to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:03.568 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:03.568 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:03.568 Installing lib/librte_kvargs.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.568 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.568 Installing lib/librte_telemetry.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.568 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.568 Installing lib/librte_eal.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.568 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.568 Installing lib/librte_ring.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.568 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.568 Installing lib/librte_rcu.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.568 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.568 Installing lib/librte_mempool.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.568 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.568 Installing lib/librte_mbuf.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_net.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_meter.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_ethdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_pci.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_cmdline.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_cmdline.so.23.0 to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_metrics.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_hash.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_timer.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_acl.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_bbdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_bpf.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_compressdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_distributor.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_efd.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_eventdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_gpudev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_gro.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_gso.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 
Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_jobstats.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_latencystats.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_lpm.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_member.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_pcapng.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_power.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_rawdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_regexdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_dmadev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.569 Installing lib/librte_rib.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.830 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.830 Installing lib/librte_reorder.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.830 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.830 Installing lib/librte_sched.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.830 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.830 Installing lib/librte_security.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.830 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.830 Installing lib/librte_stack.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.831 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.831 Installing lib/librte_vhost.a to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.831 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.831 Installing lib/librte_ipsec.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.831 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.831 Installing lib/librte_fib.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.831 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.831 Installing lib/librte_port.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.831 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.831 Installing lib/librte_pdump.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.831 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.831 Installing lib/librte_table.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.831 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.831 Installing lib/librte_pipeline.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.831 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.831 Installing lib/librte_graph.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.831 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.831 Installing lib/librte_node.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.831 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.831 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.831 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:03.831 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.831 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:03.831 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.831 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:03.831 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:03.831 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:03.831 Installing app/dpdk-dumpcap to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:03.831 Installing app/dpdk-pdump to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:03.831 Installing app/dpdk-proc-info to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:03.831 Installing app/dpdk-test-acl to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:03.831 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:03.831 Installing app/dpdk-test-cmdline to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:03.831 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:03.831 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:03.831 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:03.831 Installing app/dpdk-test-fib to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:03.831 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:03.831 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:03.831 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:03.831 Installing app/dpdk-testpmd to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:03.831 Installing app/dpdk-test-regex to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:03.831 Installing app/dpdk-test-sad to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:03.831 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:03.831 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.831 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 
00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 
Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.832 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_hash.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/compressdev/rte_comp.h 
to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/gso/rte_gso.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.833 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_stub.h 
to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin 00:02:03.834 
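The four usertools scripts copied into build/bin above (dpdk-devbind.py, dpdk-pmdinfo.py, dpdk-telemetry.py, dpdk-hugepages.py) are DPDK's standard helper scripts. As a hedged sketch of how two of them are typically invoked once installed, using the workspace path from this log and the flags documented upstream:

# Show which NICs are bound to kernel drivers vs. DPDK-compatible drivers
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin/dpdk-devbind.py --status
# Report the hugepage reservation currently configured on the build host
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/bin/dpdk-hugepages.py --show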
Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/pkgconfig 00:02:03.834 Installing /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/pkgconfig 00:02:03.834 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:03.834 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:03.834 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:03.834 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:03.834 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:03.834 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:03.834 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:03.834 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:03.834 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:03.834 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:03.834 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:03.834 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:03.834 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:03.834 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:03.834 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:03.834 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_net.so 00:02:03.835 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:03.835 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:03.835 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:03.835 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:03.835 Installing symlink pointing to librte_pci.so.23.0 to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:03.835 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:03.835 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:03.835 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:03.835 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:03.835 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:03.835 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:03.835 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:03.835 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:03.835 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:03.835 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:03.835 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:03.835 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:03.835 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:03.835 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:03.835 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:03.835 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:03.835 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:03.835 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:03.835 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:03.835 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:03.835 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:03.835 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:03.835 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:03.835 Installing symlink pointing 
to librte_distributor.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:03.835 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:03.835 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:03.835 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:03.835 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:03.835 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:03.835 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:03.835 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:03.835 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:03.835 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:03.835 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:03.835 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:03.835 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:03.835 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:03.835 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:03.835 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:03.835 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:03.835 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:03.835 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:03.835 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:03.835 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:03.835 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_member.so 00:02:03.835 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:03.835 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:03.835 
Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:03.835 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_power.so 00:02:03.835 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:03.835 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:03.835 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:03.835 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:03.835 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:03.835 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:03.835 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:03.835 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:03.835 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:03.835 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:03.835 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:03.835 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:03.835 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:03.835 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_security.so 00:02:03.835 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:03.835 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:03.835 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:03.835 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:03.835 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:03.835 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:03.835 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:03.835 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:03.835 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:03.835 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:03.835 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:03.835 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:03.835 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:03.835 Installing symlink pointing to 
librte_stack.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:03.835 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:03.835 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:03.835 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:03.835 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:03.836 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:03.836 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:03.836 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:03.836 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_port.so 00:02:03.836 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:03.836 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:03.836 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:03.836 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_table.so 00:02:03.836 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:03.836 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:03.836 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:03.836 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:03.836 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:03.836 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/librte_node.so 00:02:03.836 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:03.836 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:03.836 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:03.836 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:03.836 Installing symlink pointing to librte_mempool_ring.so.23.0 to 
/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:03.836 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:03.836 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:03.836 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:03.836 Running custom install script '/bin/sh /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:03.836 19:49:51 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:02:03.836 19:49:51 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:03.836 19:49:51 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:02:03.836 19:49:51 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:03.836 00:02:03.836 real 0m25.235s 00:02:03.836 user 6m32.442s 00:02:03.836 sys 2m10.259s 00:02:03.836 19:49:51 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:03.836 19:49:51 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:03.836 ************************************ 00:02:03.836 END TEST build_native_dpdk 00:02:03.836 ************************************ 00:02:04.095 19:49:51 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:04.095 19:49:51 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:04.095 19:49:51 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:02:04.095 19:49:51 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:02:04.095 19:49:51 -- common/autobuild_common.sh@425 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:02:04.095 19:49:51 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:02:04.095 19:49:51 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:04.095 19:49:51 -- common/autotest_common.sh@10 -- $ set +x 00:02:04.095 ************************************ 00:02:04.095 START TEST autobuild_llvm_precompile 00:02:04.095 ************************************ 00:02:04.095 19:49:51 autobuild_llvm_precompile -- common/autotest_common.sh@1121 -- $ _llvm_precompile 00:02:04.095 19:49:51 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:02:04.095 19:49:51 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38) 00:02:04.095 Target: x86_64-redhat-linux-gnu 00:02:04.095 Thread model: posix 00:02:04.095 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:02:04.095 19:49:51 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16 00:02:04.095 19:49:51 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16 00:02:04.095 19:49:51 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16 00:02:04.095 19:49:51 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16 00:02:04.095 19:49:51 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16 00:02:04.095 19:49:51 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ 
fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:02:04.095 19:49:51 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:02:04.095 19:49:51 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]] 00:02:04.095 19:49:51 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a' 00:02:04.095 19:49:51 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:02:04.354 Using /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:04.354 DPDK libraries: /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:04.354 DPDK includes: //var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:04.614 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:02:04.873 Using 'verbs' RDMA provider 00:02:21.145 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:36.053 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:36.053 Creating mk/config.mk...done. 00:02:36.053 Creating mk/cc.flags.mk...done. 00:02:36.053 Type 'make' to build. 00:02:36.053 00:02:36.053 real 0m30.162s 00:02:36.053 user 0m12.904s 00:02:36.053 sys 0m16.655s 00:02:36.053 19:50:21 autobuild_llvm_precompile -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:36.053 19:50:21 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:02:36.053 ************************************ 00:02:36.053 END TEST autobuild_llvm_precompile 00:02:36.053 ************************************ 00:02:36.053 19:50:21 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:36.053 19:50:21 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:36.053 19:50:21 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:36.053 19:50:21 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:02:36.053 19:50:21 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:02:36.053 Using /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
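The configure step above reports "Using /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/pkgconfig for additional libs", i.e. SPDK resolves the DPDK it just installed through the libdpdk.pc / libdpdk-libs.pc files written earlier in this log. A minimal sketch of the equivalent manual lookup, assuming the same workspace layout:

# Point pkg-config at the freshly installed DPDK .pc files
export PKG_CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib/pkgconfig
# Version the .pc file was generated for (a 22.11.x release here, matching the .so.23 ABI seen above)
pkg-config --modversion libdpdk
# Compiler and linker flags the SPDK build picks up for DPDK
pkg-config --cflags --libs libdpdk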
00:02:36.053 DPDK libraries: /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:02:36.053 DPDK includes: //var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:02:36.053 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:02:36.053 Using 'verbs' RDMA provider 00:02:48.348 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:00.556 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:00.556 Creating mk/config.mk...done. 00:03:00.556 Creating mk/cc.flags.mk...done. 00:03:00.556 Type 'make' to build. 00:03:00.556 19:50:47 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:03:00.556 19:50:47 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:00.556 19:50:47 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:00.556 19:50:47 -- common/autotest_common.sh@10 -- $ set +x 00:03:00.556 ************************************ 00:03:00.556 START TEST make 00:03:00.556 ************************************ 00:03:00.556 19:50:47 make -- common/autotest_common.sh@1121 -- $ make -j112 00:03:00.556 make[1]: Nothing to be done for 'all'. 00:03:01.934 The Meson build system 00:03:01.935 Version: 1.3.1 00:03:01.935 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:03:01.935 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:01.935 Build type: native build 00:03:01.935 Project name: libvfio-user 00:03:01.935 Project version: 0.0.1 00:03:01.935 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:03:01.935 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:03:01.935 Host machine cpu family: x86_64 00:03:01.935 Host machine cpu: x86_64 00:03:01.935 Run-time dependency threads found: YES 00:03:01.935 Library dl found: YES 00:03:01.935 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:01.935 Run-time dependency json-c found: YES 0.17 00:03:01.935 Run-time dependency cmocka found: YES 1.1.7 00:03:01.935 Program pytest-3 found: NO 00:03:01.935 Program flake8 found: NO 00:03:01.935 Program misspell-fixer found: NO 00:03:01.935 Program restructuredtext-lint found: NO 00:03:01.935 Program valgrind found: YES (/usr/bin/valgrind) 00:03:01.935 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:01.935 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:01.935 Compiler for C supports arguments -Wwrite-strings: YES 00:03:01.935 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:01.935 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:01.935 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:01.935 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
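The libvfio-user subproject is configured and built with Meson at this point in the log (source dir spdk/libvfio-user, build dir spdk/build/libvfio-user/build-debug, ninja backend). A hedged sketch of a standalone equivalent of the setup / compile / install sequence, with directories taken from the log and options mirroring the "User defined options" reported just below:

# Configure the out-of-tree build directory (options as shown under "User defined options")
meson setup /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug \
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user \
    -Dbuildtype=debug -Ddefault_library=static -Dlibdir=/usr/local/lib
# Compile with the autodetected ninja backend
ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
# Stage the install under the SPDK build tree, as the log's install step does
DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user \
    meson install --quiet -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug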
00:03:01.935 Build targets in project: 8 00:03:01.935 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:01.935 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:01.935 00:03:01.935 libvfio-user 0.0.1 00:03:01.935 00:03:01.935 User defined options 00:03:01.935 buildtype : debug 00:03:01.935 default_library: static 00:03:01.935 libdir : /usr/local/lib 00:03:01.935 00:03:01.935 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:02.193 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:02.193 [1/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:03:02.193 [2/36] Compiling C object samples/lspci.p/lspci.c.o 00:03:02.193 [3/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:03:02.193 [4/36] Compiling C object samples/null.p/null.c.o 00:03:02.193 [5/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:03:02.193 [6/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:02.193 [7/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:02.193 [8/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:02.193 [9/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:02.193 [10/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:02.193 [11/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:03:02.193 [12/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:02.193 [13/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:03:02.193 [14/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:02.193 [15/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:02.193 [16/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:02.193 [17/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:03:02.193 [18/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:03:02.193 [19/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:02.193 [20/36] Compiling C object test/unit_tests.p/mocks.c.o 00:03:02.193 [21/36] Compiling C object samples/server.p/server.c.o 00:03:02.193 [22/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:02.193 [23/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:02.193 [24/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:02.193 [25/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:02.193 [26/36] Compiling C object samples/client.p/client.c.o 00:03:02.193 [27/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:02.193 [28/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:03:02.193 [29/36] Linking static target lib/libvfio-user.a 00:03:02.193 [30/36] Linking target samples/client 00:03:02.193 [31/36] Linking target samples/shadow_ioeventfd_server 00:03:02.193 [32/36] Linking target test/unit_tests 00:03:02.193 [33/36] Linking target samples/lspci 00:03:02.193 [34/36] Linking target samples/server 00:03:02.452 [35/36] Linking target samples/null 00:03:02.452 [36/36] Linking target samples/gpio-pci-idio-16 00:03:02.452 INFO: autodetecting backend as ninja 00:03:02.452 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:02.452 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:02.711 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:02.711 ninja: no work to do. 00:03:05.996 CC lib/log/log.o 00:03:05.996 CC lib/log/log_deprecated.o 00:03:05.996 CC lib/log/log_flags.o 00:03:05.996 CC lib/ut/ut.o 00:03:05.996 CC lib/ut_mock/mock.o 00:03:05.996 LIB libspdk_ut.a 00:03:05.996 LIB libspdk_log.a 00:03:05.996 LIB libspdk_ut_mock.a 00:03:06.254 CC lib/dma/dma.o 00:03:06.254 CC lib/ioat/ioat.o 00:03:06.254 CC lib/util/base64.o 00:03:06.254 CC lib/util/bit_array.o 00:03:06.254 CC lib/util/crc32.o 00:03:06.254 CC lib/util/cpuset.o 00:03:06.254 CC lib/util/crc16.o 00:03:06.254 CC lib/util/crc32_ieee.o 00:03:06.254 CC lib/util/crc64.o 00:03:06.254 CC lib/util/crc32c.o 00:03:06.254 CC lib/util/dif.o 00:03:06.254 CC lib/util/file.o 00:03:06.254 CC lib/util/fd.o 00:03:06.254 CC lib/util/hexlify.o 00:03:06.254 CXX lib/trace_parser/trace.o 00:03:06.254 CC lib/util/iov.o 00:03:06.254 CC lib/util/pipe.o 00:03:06.254 CC lib/util/math.o 00:03:06.254 CC lib/util/strerror_tls.o 00:03:06.254 CC lib/util/string.o 00:03:06.254 CC lib/util/uuid.o 00:03:06.254 CC lib/util/fd_group.o 00:03:06.254 CC lib/util/xor.o 00:03:06.254 CC lib/util/zipf.o 00:03:06.254 CC lib/vfio_user/host/vfio_user_pci.o 00:03:06.254 CC lib/vfio_user/host/vfio_user.o 00:03:06.254 LIB libspdk_dma.a 00:03:06.254 LIB libspdk_ioat.a 00:03:06.513 LIB libspdk_vfio_user.a 00:03:06.513 LIB libspdk_util.a 00:03:06.513 LIB libspdk_trace_parser.a 00:03:06.770 CC lib/json/json_parse.o 00:03:06.770 CC lib/json/json_write.o 00:03:06.770 CC lib/json/json_util.o 00:03:06.770 CC lib/rdma/common.o 00:03:06.770 CC lib/rdma/rdma_verbs.o 00:03:06.770 CC lib/idxd/idxd_user.o 00:03:06.770 CC lib/idxd/idxd.o 00:03:06.770 CC lib/conf/conf.o 00:03:06.770 CC lib/idxd/idxd_kernel.o 00:03:06.770 CC lib/vmd/vmd.o 00:03:06.770 CC lib/vmd/led.o 00:03:06.770 CC lib/env_dpdk/memory.o 00:03:06.770 CC lib/env_dpdk/env.o 00:03:06.770 CC lib/env_dpdk/init.o 00:03:06.770 CC lib/env_dpdk/pci.o 00:03:06.770 CC lib/env_dpdk/threads.o 00:03:06.770 CC lib/env_dpdk/pci_ioat.o 00:03:06.770 CC lib/env_dpdk/pci_virtio.o 00:03:06.770 CC lib/env_dpdk/pci_vmd.o 00:03:06.770 CC lib/env_dpdk/pci_idxd.o 00:03:06.770 CC lib/env_dpdk/pci_event.o 00:03:06.770 CC lib/env_dpdk/sigbus_handler.o 00:03:06.770 CC lib/env_dpdk/pci_dpdk.o 00:03:06.770 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:06.770 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:07.027 LIB libspdk_conf.a 00:03:07.027 LIB libspdk_rdma.a 00:03:07.027 LIB libspdk_json.a 00:03:07.027 LIB libspdk_idxd.a 00:03:07.027 LIB libspdk_vmd.a 00:03:07.283 CC lib/jsonrpc/jsonrpc_server.o 00:03:07.283 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:07.283 CC lib/jsonrpc/jsonrpc_client.o 00:03:07.283 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:07.539 LIB libspdk_jsonrpc.a 00:03:07.795 LIB libspdk_env_dpdk.a 00:03:07.795 CC lib/rpc/rpc.o 00:03:07.795 LIB libspdk_rpc.a 00:03:08.358 CC lib/notify/notify_rpc.o 00:03:08.358 CC lib/notify/notify.o 00:03:08.358 CC lib/trace/trace.o 00:03:08.358 CC lib/trace/trace_flags.o 00:03:08.358 CC lib/trace/trace_rpc.o 00:03:08.358 CC lib/keyring/keyring.o 00:03:08.358 CC lib/keyring/keyring_rpc.o 00:03:08.358 LIB libspdk_notify.a 00:03:08.358 LIB libspdk_trace.a 00:03:08.358 LIB libspdk_keyring.a 00:03:08.615 CC lib/thread/thread.o 00:03:08.616 CC lib/thread/iobuf.o 00:03:08.616 CC lib/sock/sock.o 00:03:08.616 CC lib/sock/sock_rpc.o 00:03:08.872 LIB libspdk_sock.a 
00:03:09.437 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:09.437 CC lib/nvme/nvme_ctrlr.o 00:03:09.437 CC lib/nvme/nvme_fabric.o 00:03:09.437 CC lib/nvme/nvme_pcie_common.o 00:03:09.437 CC lib/nvme/nvme_ns_cmd.o 00:03:09.437 CC lib/nvme/nvme_ns.o 00:03:09.437 CC lib/nvme/nvme_qpair.o 00:03:09.437 CC lib/nvme/nvme_pcie.o 00:03:09.437 CC lib/nvme/nvme.o 00:03:09.437 CC lib/nvme/nvme_quirks.o 00:03:09.437 CC lib/nvme/nvme_transport.o 00:03:09.437 CC lib/nvme/nvme_discovery.o 00:03:09.437 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:09.437 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:09.437 CC lib/nvme/nvme_tcp.o 00:03:09.437 CC lib/nvme/nvme_opal.o 00:03:09.437 CC lib/nvme/nvme_io_msg.o 00:03:09.437 CC lib/nvme/nvme_poll_group.o 00:03:09.437 CC lib/nvme/nvme_zns.o 00:03:09.437 CC lib/nvme/nvme_stubs.o 00:03:09.437 CC lib/nvme/nvme_auth.o 00:03:09.437 CC lib/nvme/nvme_cuse.o 00:03:09.437 CC lib/nvme/nvme_vfio_user.o 00:03:09.437 CC lib/nvme/nvme_rdma.o 00:03:09.437 LIB libspdk_thread.a 00:03:09.696 CC lib/blob/blobstore.o 00:03:09.696 CC lib/blob/request.o 00:03:09.696 CC lib/blob/zeroes.o 00:03:09.696 CC lib/blob/blob_bs_dev.o 00:03:09.696 CC lib/accel/accel.o 00:03:09.696 CC lib/accel/accel_rpc.o 00:03:09.696 CC lib/accel/accel_sw.o 00:03:09.696 CC lib/vfu_tgt/tgt_endpoint.o 00:03:09.696 CC lib/vfu_tgt/tgt_rpc.o 00:03:09.696 CC lib/init/json_config.o 00:03:09.696 CC lib/init/subsystem.o 00:03:09.696 CC lib/init/subsystem_rpc.o 00:03:09.696 CC lib/init/rpc.o 00:03:09.696 CC lib/virtio/virtio.o 00:03:09.696 CC lib/virtio/virtio_vhost_user.o 00:03:09.696 CC lib/virtio/virtio_vfio_user.o 00:03:09.696 CC lib/virtio/virtio_pci.o 00:03:09.954 LIB libspdk_init.a 00:03:09.954 LIB libspdk_vfu_tgt.a 00:03:09.954 LIB libspdk_virtio.a 00:03:10.212 CC lib/event/reactor.o 00:03:10.212 CC lib/event/log_rpc.o 00:03:10.212 CC lib/event/app.o 00:03:10.212 CC lib/event/app_rpc.o 00:03:10.212 CC lib/event/scheduler_static.o 00:03:10.471 LIB libspdk_accel.a 00:03:10.471 LIB libspdk_event.a 00:03:10.471 LIB libspdk_nvme.a 00:03:10.729 CC lib/bdev/bdev.o 00:03:10.729 CC lib/bdev/bdev_rpc.o 00:03:10.729 CC lib/bdev/bdev_zone.o 00:03:10.729 CC lib/bdev/part.o 00:03:10.729 CC lib/bdev/scsi_nvme.o 00:03:11.662 LIB libspdk_blob.a 00:03:11.662 CC lib/blobfs/blobfs.o 00:03:11.662 CC lib/blobfs/tree.o 00:03:11.662 CC lib/lvol/lvol.o 00:03:12.228 LIB libspdk_lvol.a 00:03:12.228 LIB libspdk_blobfs.a 00:03:12.486 LIB libspdk_bdev.a 00:03:12.745 CC lib/ublk/ublk.o 00:03:12.745 CC lib/ublk/ublk_rpc.o 00:03:12.745 CC lib/nvmf/ctrlr_discovery.o 00:03:12.745 CC lib/nvmf/ctrlr.o 00:03:12.745 CC lib/nvmf/ctrlr_bdev.o 00:03:12.745 CC lib/nvmf/subsystem.o 00:03:12.745 CC lib/nvmf/nvmf_rpc.o 00:03:12.745 CC lib/nvmf/nvmf.o 00:03:12.745 CC lib/nvmf/transport.o 00:03:12.745 CC lib/nvmf/tcp.o 00:03:12.745 CC lib/ftl/ftl_core.o 00:03:12.745 CC lib/nvmf/stubs.o 00:03:12.745 CC lib/ftl/ftl_init.o 00:03:12.745 CC lib/nvmf/vfio_user.o 00:03:12.745 CC lib/nvmf/mdns_server.o 00:03:12.745 CC lib/ftl/ftl_layout.o 00:03:12.745 CC lib/nbd/nbd.o 00:03:12.745 CC lib/nvmf/rdma.o 00:03:12.745 CC lib/nbd/nbd_rpc.o 00:03:12.745 CC lib/ftl/ftl_debug.o 00:03:12.745 CC lib/nvmf/auth.o 00:03:12.745 CC lib/ftl/ftl_io.o 00:03:12.745 CC lib/ftl/ftl_sb.o 00:03:12.745 CC lib/ftl/ftl_l2p.o 00:03:12.745 CC lib/ftl/ftl_l2p_flat.o 00:03:12.745 CC lib/ftl/ftl_nv_cache.o 00:03:12.745 CC lib/ftl/ftl_band_ops.o 00:03:12.745 CC lib/ftl/ftl_band.o 00:03:12.745 CC lib/scsi/dev.o 00:03:12.745 CC lib/scsi/lun.o 00:03:12.745 CC lib/scsi/port.o 00:03:12.745 CC lib/ftl/ftl_writer.o 00:03:12.745 CC 
lib/scsi/scsi.o 00:03:12.745 CC lib/ftl/ftl_rq.o 00:03:12.745 CC lib/scsi/scsi_bdev.o 00:03:12.745 CC lib/ftl/ftl_reloc.o 00:03:12.745 CC lib/scsi/scsi_pr.o 00:03:12.745 CC lib/ftl/ftl_l2p_cache.o 00:03:12.745 CC lib/scsi/scsi_rpc.o 00:03:12.745 CC lib/ftl/ftl_p2l.o 00:03:12.745 CC lib/scsi/task.o 00:03:12.745 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:12.745 CC lib/ftl/mngt/ftl_mngt.o 00:03:12.745 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:12.745 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:12.745 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:12.745 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:12.745 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:12.745 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:12.745 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:12.745 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:12.745 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:12.745 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:12.745 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:12.745 CC lib/ftl/utils/ftl_conf.o 00:03:12.745 CC lib/ftl/utils/ftl_md.o 00:03:12.745 CC lib/ftl/utils/ftl_bitmap.o 00:03:12.745 CC lib/ftl/utils/ftl_property.o 00:03:12.745 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:12.745 CC lib/ftl/utils/ftl_mempool.o 00:03:12.745 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:12.745 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:12.745 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:12.745 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:12.745 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:12.745 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:12.745 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:12.745 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:12.745 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:12.745 CC lib/ftl/base/ftl_base_dev.o 00:03:12.745 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:12.745 CC lib/ftl/ftl_trace.o 00:03:12.745 CC lib/ftl/base/ftl_base_bdev.o 00:03:13.003 LIB libspdk_nbd.a 00:03:13.259 LIB libspdk_scsi.a 00:03:13.260 LIB libspdk_ublk.a 00:03:13.526 LIB libspdk_ftl.a 00:03:13.526 CC lib/vhost/vhost.o 00:03:13.526 CC lib/iscsi/conn.o 00:03:13.526 CC lib/iscsi/init_grp.o 00:03:13.526 CC lib/vhost/vhost_blk.o 00:03:13.526 CC lib/iscsi/iscsi.o 00:03:13.526 CC lib/vhost/vhost_rpc.o 00:03:13.526 CC lib/vhost/vhost_scsi.o 00:03:13.526 CC lib/vhost/rte_vhost_user.o 00:03:13.526 CC lib/iscsi/md5.o 00:03:13.526 CC lib/iscsi/param.o 00:03:13.526 CC lib/iscsi/portal_grp.o 00:03:13.526 CC lib/iscsi/tgt_node.o 00:03:13.526 CC lib/iscsi/iscsi_subsystem.o 00:03:13.526 CC lib/iscsi/iscsi_rpc.o 00:03:13.526 CC lib/iscsi/task.o 00:03:14.088 LIB libspdk_nvmf.a 00:03:14.088 LIB libspdk_vhost.a 00:03:14.344 LIB libspdk_iscsi.a 00:03:14.912 CC module/vfu_device/vfu_virtio_blk.o 00:03:14.912 CC module/vfu_device/vfu_virtio.o 00:03:14.912 CC module/vfu_device/vfu_virtio_rpc.o 00:03:14.912 CC module/vfu_device/vfu_virtio_scsi.o 00:03:14.912 CC module/env_dpdk/env_dpdk_rpc.o 00:03:14.912 LIB libspdk_env_dpdk_rpc.a 00:03:14.912 CC module/blob/bdev/blob_bdev.o 00:03:14.912 CC module/keyring/file/keyring_rpc.o 00:03:14.912 CC module/accel/ioat/accel_ioat_rpc.o 00:03:14.912 CC module/accel/ioat/accel_ioat.o 00:03:14.912 CC module/keyring/file/keyring.o 00:03:14.912 CC module/accel/dsa/accel_dsa.o 00:03:14.912 CC module/accel/dsa/accel_dsa_rpc.o 00:03:14.912 CC module/accel/error/accel_error.o 00:03:14.912 CC module/accel/iaa/accel_iaa.o 00:03:14.912 CC module/accel/iaa/accel_iaa_rpc.o 00:03:14.912 CC module/scheduler/gscheduler/gscheduler.o 00:03:14.912 CC module/accel/error/accel_error_rpc.o 00:03:14.912 CC module/keyring/linux/keyring.o 00:03:14.912 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:14.912 CC 
module/keyring/linux/keyring_rpc.o 00:03:14.912 CC module/sock/posix/posix.o 00:03:14.912 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:15.171 LIB libspdk_scheduler_gscheduler.a 00:03:15.171 LIB libspdk_keyring_file.a 00:03:15.171 LIB libspdk_keyring_linux.a 00:03:15.171 LIB libspdk_scheduler_dpdk_governor.a 00:03:15.171 LIB libspdk_accel_error.a 00:03:15.171 LIB libspdk_accel_ioat.a 00:03:15.171 LIB libspdk_scheduler_dynamic.a 00:03:15.171 LIB libspdk_accel_iaa.a 00:03:15.171 LIB libspdk_blob_bdev.a 00:03:15.171 LIB libspdk_accel_dsa.a 00:03:15.171 LIB libspdk_vfu_device.a 00:03:15.440 LIB libspdk_sock_posix.a 00:03:15.440 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:15.440 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:15.440 CC module/bdev/raid/bdev_raid_rpc.o 00:03:15.440 CC module/bdev/error/vbdev_error.o 00:03:15.440 CC module/bdev/raid/bdev_raid.o 00:03:15.440 CC module/bdev/lvol/vbdev_lvol.o 00:03:15.440 CC module/bdev/error/vbdev_error_rpc.o 00:03:15.440 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:15.440 CC module/bdev/raid/bdev_raid_sb.o 00:03:15.440 CC module/bdev/nvme/bdev_nvme.o 00:03:15.440 CC module/bdev/raid/raid0.o 00:03:15.440 CC module/bdev/raid/raid1.o 00:03:15.440 CC module/bdev/raid/concat.o 00:03:15.440 CC module/bdev/null/bdev_null.o 00:03:15.440 CC module/blobfs/bdev/blobfs_bdev.o 00:03:15.440 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:15.440 CC module/bdev/nvme/nvme_rpc.o 00:03:15.440 CC module/bdev/nvme/bdev_mdns_client.o 00:03:15.440 CC module/bdev/null/bdev_null_rpc.o 00:03:15.440 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:15.440 CC module/bdev/nvme/vbdev_opal.o 00:03:15.440 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:15.440 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:15.440 CC module/bdev/gpt/gpt.o 00:03:15.440 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:15.440 CC module/bdev/gpt/vbdev_gpt.o 00:03:15.440 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:15.440 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:15.440 CC module/bdev/ftl/bdev_ftl.o 00:03:15.440 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:15.440 CC module/bdev/malloc/bdev_malloc.o 00:03:15.440 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:15.440 CC module/bdev/aio/bdev_aio.o 00:03:15.440 CC module/bdev/aio/bdev_aio_rpc.o 00:03:15.440 CC module/bdev/split/vbdev_split_rpc.o 00:03:15.440 CC module/bdev/passthru/vbdev_passthru.o 00:03:15.440 CC module/bdev/split/vbdev_split.o 00:03:15.440 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:15.440 CC module/bdev/delay/vbdev_delay.o 00:03:15.440 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:15.440 CC module/bdev/iscsi/bdev_iscsi.o 00:03:15.440 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:15.701 LIB libspdk_blobfs_bdev.a 00:03:15.701 LIB libspdk_bdev_error.a 00:03:15.701 LIB libspdk_bdev_split.a 00:03:15.701 LIB libspdk_bdev_null.a 00:03:15.701 LIB libspdk_bdev_gpt.a 00:03:15.701 LIB libspdk_bdev_ftl.a 00:03:15.701 LIB libspdk_bdev_zone_block.a 00:03:15.701 LIB libspdk_bdev_passthru.a 00:03:15.701 LIB libspdk_bdev_aio.a 00:03:15.701 LIB libspdk_bdev_iscsi.a 00:03:15.701 LIB libspdk_bdev_delay.a 00:03:15.958 LIB libspdk_bdev_malloc.a 00:03:15.958 LIB libspdk_bdev_lvol.a 00:03:15.958 LIB libspdk_bdev_virtio.a 00:03:16.216 LIB libspdk_bdev_raid.a 00:03:16.783 LIB libspdk_bdev_nvme.a 00:03:17.349 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:17.350 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:17.350 CC module/event/subsystems/keyring/keyring.o 00:03:17.350 CC module/event/subsystems/sock/sock.o 00:03:17.350 CC 
module/event/subsystems/vmd/vmd.o 00:03:17.350 CC module/event/subsystems/iobuf/iobuf.o 00:03:17.350 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:17.350 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:17.350 CC module/event/subsystems/scheduler/scheduler.o 00:03:17.607 LIB libspdk_event_vfu_tgt.a 00:03:17.607 LIB libspdk_event_vhost_blk.a 00:03:17.607 LIB libspdk_event_keyring.a 00:03:17.607 LIB libspdk_event_sock.a 00:03:17.607 LIB libspdk_event_vmd.a 00:03:17.607 LIB libspdk_event_scheduler.a 00:03:17.607 LIB libspdk_event_iobuf.a 00:03:17.867 CC module/event/subsystems/accel/accel.o 00:03:17.867 LIB libspdk_event_accel.a 00:03:18.433 CC module/event/subsystems/bdev/bdev.o 00:03:18.433 LIB libspdk_event_bdev.a 00:03:18.692 CC module/event/subsystems/nbd/nbd.o 00:03:18.692 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:18.692 CC module/event/subsystems/ublk/ublk.o 00:03:18.692 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:18.692 CC module/event/subsystems/scsi/scsi.o 00:03:18.951 LIB libspdk_event_ublk.a 00:03:18.951 LIB libspdk_event_nbd.a 00:03:18.951 LIB libspdk_event_scsi.a 00:03:18.951 LIB libspdk_event_nvmf.a 00:03:19.210 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:19.210 CC module/event/subsystems/iscsi/iscsi.o 00:03:19.210 LIB libspdk_event_vhost_scsi.a 00:03:19.468 LIB libspdk_event_iscsi.a 00:03:19.737 CXX app/trace/trace.o 00:03:19.737 CC app/spdk_nvme_perf/perf.o 00:03:19.737 CC app/spdk_nvme_discover/discovery_aer.o 00:03:19.737 CC app/spdk_top/spdk_top.o 00:03:19.737 CC app/spdk_nvme_identify/identify.o 00:03:19.737 CC app/trace_record/trace_record.o 00:03:19.737 CC app/spdk_lspci/spdk_lspci.o 00:03:19.737 TEST_HEADER include/spdk/accel.h 00:03:19.737 TEST_HEADER include/spdk/accel_module.h 00:03:19.737 TEST_HEADER include/spdk/assert.h 00:03:19.737 TEST_HEADER include/spdk/barrier.h 00:03:19.737 TEST_HEADER include/spdk/base64.h 00:03:19.737 TEST_HEADER include/spdk/bdev_module.h 00:03:19.737 TEST_HEADER include/spdk/bdev.h 00:03:19.737 TEST_HEADER include/spdk/bit_array.h 00:03:19.737 TEST_HEADER include/spdk/bdev_zone.h 00:03:19.737 CC app/nvmf_tgt/nvmf_main.o 00:03:19.737 TEST_HEADER include/spdk/bit_pool.h 00:03:19.737 TEST_HEADER include/spdk/blob_bdev.h 00:03:19.737 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:19.737 TEST_HEADER include/spdk/blobfs.h 00:03:19.737 TEST_HEADER include/spdk/conf.h 00:03:19.737 TEST_HEADER include/spdk/blob.h 00:03:19.737 TEST_HEADER include/spdk/config.h 00:03:19.737 TEST_HEADER include/spdk/crc16.h 00:03:19.737 TEST_HEADER include/spdk/cpuset.h 00:03:19.737 TEST_HEADER include/spdk/crc64.h 00:03:19.737 TEST_HEADER include/spdk/crc32.h 00:03:19.737 TEST_HEADER include/spdk/dif.h 00:03:19.737 TEST_HEADER include/spdk/dma.h 00:03:19.737 CC test/rpc_client/rpc_client_test.o 00:03:19.737 TEST_HEADER include/spdk/endian.h 00:03:19.737 TEST_HEADER include/spdk/env_dpdk.h 00:03:19.737 TEST_HEADER include/spdk/env.h 00:03:19.737 TEST_HEADER include/spdk/event.h 00:03:19.737 TEST_HEADER include/spdk/fd_group.h 00:03:19.737 TEST_HEADER include/spdk/fd.h 00:03:19.737 TEST_HEADER include/spdk/file.h 00:03:19.737 TEST_HEADER include/spdk/ftl.h 00:03:19.737 TEST_HEADER include/spdk/gpt_spec.h 00:03:19.737 CC app/iscsi_tgt/iscsi_tgt.o 00:03:19.737 TEST_HEADER include/spdk/histogram_data.h 00:03:19.738 TEST_HEADER include/spdk/hexlify.h 00:03:19.738 CC app/vhost/vhost.o 00:03:19.738 TEST_HEADER include/spdk/idxd.h 00:03:19.738 TEST_HEADER include/spdk/init.h 00:03:19.738 TEST_HEADER include/spdk/idxd_spec.h 00:03:19.738 
TEST_HEADER include/spdk/ioat.h 00:03:19.738 CC app/spdk_tgt/spdk_tgt.o 00:03:19.738 TEST_HEADER include/spdk/ioat_spec.h 00:03:19.738 TEST_HEADER include/spdk/iscsi_spec.h 00:03:19.738 TEST_HEADER include/spdk/json.h 00:03:19.738 TEST_HEADER include/spdk/jsonrpc.h 00:03:19.738 TEST_HEADER include/spdk/keyring.h 00:03:19.738 TEST_HEADER include/spdk/keyring_module.h 00:03:19.738 TEST_HEADER include/spdk/likely.h 00:03:19.738 TEST_HEADER include/spdk/log.h 00:03:19.738 TEST_HEADER include/spdk/lvol.h 00:03:19.738 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:19.738 TEST_HEADER include/spdk/mmio.h 00:03:19.738 TEST_HEADER include/spdk/memory.h 00:03:19.738 TEST_HEADER include/spdk/notify.h 00:03:19.738 TEST_HEADER include/spdk/nbd.h 00:03:19.738 TEST_HEADER include/spdk/nvme.h 00:03:19.738 TEST_HEADER include/spdk/nvme_intel.h 00:03:19.738 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:19.738 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:19.738 TEST_HEADER include/spdk/nvme_spec.h 00:03:19.738 TEST_HEADER include/spdk/nvme_zns.h 00:03:19.738 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:19.738 CC app/spdk_dd/spdk_dd.o 00:03:19.738 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:19.738 TEST_HEADER include/spdk/nvmf.h 00:03:19.738 TEST_HEADER include/spdk/nvmf_spec.h 00:03:19.738 TEST_HEADER include/spdk/nvmf_transport.h 00:03:19.738 TEST_HEADER include/spdk/opal_spec.h 00:03:19.738 TEST_HEADER include/spdk/opal.h 00:03:19.738 TEST_HEADER include/spdk/pci_ids.h 00:03:19.738 TEST_HEADER include/spdk/pipe.h 00:03:19.738 TEST_HEADER include/spdk/queue.h 00:03:19.738 TEST_HEADER include/spdk/reduce.h 00:03:19.738 TEST_HEADER include/spdk/rpc.h 00:03:19.738 TEST_HEADER include/spdk/scheduler.h 00:03:19.738 TEST_HEADER include/spdk/scsi_spec.h 00:03:19.738 TEST_HEADER include/spdk/scsi.h 00:03:19.738 TEST_HEADER include/spdk/stdinc.h 00:03:19.738 TEST_HEADER include/spdk/sock.h 00:03:19.738 TEST_HEADER include/spdk/string.h 00:03:19.738 TEST_HEADER include/spdk/thread.h 00:03:19.738 TEST_HEADER include/spdk/trace.h 00:03:19.738 TEST_HEADER include/spdk/trace_parser.h 00:03:19.738 TEST_HEADER include/spdk/ublk.h 00:03:19.738 TEST_HEADER include/spdk/tree.h 00:03:19.738 TEST_HEADER include/spdk/util.h 00:03:19.738 TEST_HEADER include/spdk/uuid.h 00:03:19.738 TEST_HEADER include/spdk/version.h 00:03:19.738 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:19.738 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:19.738 TEST_HEADER include/spdk/vhost.h 00:03:19.738 TEST_HEADER include/spdk/vmd.h 00:03:19.738 TEST_HEADER include/spdk/xor.h 00:03:19.738 TEST_HEADER include/spdk/zipf.h 00:03:19.738 CXX test/cpp_headers/accel.o 00:03:19.738 CXX test/cpp_headers/accel_module.o 00:03:19.738 CXX test/cpp_headers/assert.o 00:03:19.738 CXX test/cpp_headers/barrier.o 00:03:19.738 CXX test/cpp_headers/base64.o 00:03:19.738 CXX test/cpp_headers/bdev.o 00:03:19.738 CXX test/cpp_headers/bdev_module.o 00:03:19.738 CXX test/cpp_headers/bdev_zone.o 00:03:19.738 CXX test/cpp_headers/bit_array.o 00:03:19.738 CXX test/cpp_headers/bit_pool.o 00:03:19.738 CXX test/cpp_headers/blob_bdev.o 00:03:19.738 CXX test/cpp_headers/blobfs_bdev.o 00:03:19.738 CXX test/cpp_headers/blobfs.o 00:03:19.738 CXX test/cpp_headers/blob.o 00:03:19.738 CXX test/cpp_headers/conf.o 00:03:19.738 CXX test/cpp_headers/config.o 00:03:19.738 CXX test/cpp_headers/cpuset.o 00:03:19.738 CXX test/cpp_headers/crc16.o 00:03:19.738 CXX test/cpp_headers/crc32.o 00:03:19.738 CXX test/cpp_headers/crc64.o 00:03:19.738 CXX test/cpp_headers/dif.o 00:03:19.738 CXX 
test/cpp_headers/dma.o 00:03:19.738 CXX test/cpp_headers/endian.o 00:03:19.738 CXX test/cpp_headers/env_dpdk.o 00:03:19.738 CXX test/cpp_headers/env.o 00:03:19.738 CXX test/cpp_headers/event.o 00:03:19.738 CXX test/cpp_headers/fd_group.o 00:03:19.738 CXX test/cpp_headers/fd.o 00:03:19.738 CXX test/cpp_headers/file.o 00:03:19.738 CXX test/cpp_headers/ftl.o 00:03:19.738 CXX test/cpp_headers/gpt_spec.o 00:03:19.738 CXX test/cpp_headers/hexlify.o 00:03:19.738 CC examples/util/zipf/zipf.o 00:03:19.738 CXX test/cpp_headers/histogram_data.o 00:03:19.738 CC examples/sock/hello_world/hello_sock.o 00:03:19.738 CXX test/cpp_headers/idxd.o 00:03:19.738 CC examples/ioat/perf/perf.o 00:03:19.738 CC examples/nvme/arbitration/arbitration.o 00:03:19.738 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:19.738 CC examples/idxd/perf/perf.o 00:03:19.738 CXX test/cpp_headers/idxd_spec.o 00:03:19.738 CC examples/ioat/verify/verify.o 00:03:19.738 CC examples/nvme/reconnect/reconnect.o 00:03:19.738 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:19.738 CXX test/cpp_headers/init.o 00:03:19.738 CC examples/nvme/hello_world/hello_world.o 00:03:19.738 CC examples/nvme/hotplug/hotplug.o 00:03:19.738 CC examples/vmd/led/led.o 00:03:19.738 CC examples/vmd/lsvmd/lsvmd.o 00:03:19.738 CC app/fio/nvme/fio_plugin.o 00:03:19.738 CC examples/nvme/abort/abort.o 00:03:19.738 CC test/app/jsoncat/jsoncat.o 00:03:19.738 CC test/app/histogram_perf/histogram_perf.o 00:03:19.738 CC examples/accel/perf/accel_perf.o 00:03:19.738 CC test/thread/poller_perf/poller_perf.o 00:03:19.738 CC test/nvme/sgl/sgl.o 00:03:19.738 CC test/app/stub/stub.o 00:03:19.738 CC test/thread/lock/spdk_lock.o 00:03:19.738 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:19.738 CC test/nvme/err_injection/err_injection.o 00:03:19.738 CC test/nvme/reset/reset.o 00:03:19.738 CC test/event/event_perf/event_perf.o 00:03:19.738 CC test/nvme/reserve/reserve.o 00:03:19.738 CC test/nvme/aer/aer.o 00:03:19.738 CC test/nvme/e2edp/nvme_dp.o 00:03:19.738 CC test/nvme/startup/startup.o 00:03:19.738 CC test/env/memory/memory_ut.o 00:03:19.738 CC test/nvme/cuse/cuse.o 00:03:19.738 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:20.004 CC test/event/reactor_perf/reactor_perf.o 00:03:20.004 CC test/env/pci/pci_ut.o 00:03:20.004 CC test/nvme/connect_stress/connect_stress.o 00:03:20.004 CC test/nvme/boot_partition/boot_partition.o 00:03:20.004 CC test/nvme/overhead/overhead.o 00:03:20.004 CC test/nvme/simple_copy/simple_copy.o 00:03:20.004 CC test/nvme/compliance/nvme_compliance.o 00:03:20.004 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:20.004 CC test/event/reactor/reactor.o 00:03:20.004 CC test/nvme/fdp/fdp.o 00:03:20.004 CC test/env/vtophys/vtophys.o 00:03:20.004 CC examples/nvmf/nvmf/nvmf.o 00:03:20.004 CC examples/thread/thread/thread_ex.o 00:03:20.004 CC test/nvme/fused_ordering/fused_ordering.o 00:03:20.004 LINK spdk_lspci 00:03:20.004 CC examples/blob/cli/blobcli.o 00:03:20.004 CC examples/blob/hello_world/hello_blob.o 00:03:20.004 CC test/dma/test_dma/test_dma.o 00:03:20.004 CC test/event/app_repeat/app_repeat.o 00:03:20.004 CC examples/bdev/hello_world/hello_bdev.o 00:03:20.004 CC test/accel/dif/dif.o 00:03:20.004 CC app/fio/bdev/fio_plugin.o 00:03:20.004 CC test/bdev/bdevio/bdevio.o 00:03:20.004 CC test/event/scheduler/scheduler.o 00:03:20.004 CC test/app/bdev_svc/bdev_svc.o 00:03:20.004 CC test/blobfs/mkfs/mkfs.o 00:03:20.004 CC examples/bdev/bdevperf/bdevperf.o 00:03:20.004 CC test/env/mem_callbacks/mem_callbacks.o 00:03:20.004 LINK rpc_client_test 
00:03:20.004 LINK nvmf_tgt 00:03:20.004 CC test/lvol/esnap/esnap.o 00:03:20.004 LINK spdk_nvme_discover 00:03:20.004 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:20.004 LINK spdk_trace_record 00:03:20.004 LINK vhost 00:03:20.004 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:20.004 LINK interrupt_tgt 00:03:20.004 LINK lsvmd 00:03:20.004 LINK zipf 00:03:20.004 LINK iscsi_tgt 00:03:20.004 LINK led 00:03:20.004 LINK jsoncat 00:03:20.004 CXX test/cpp_headers/ioat.o 00:03:20.004 LINK spdk_tgt 00:03:20.004 CXX test/cpp_headers/ioat_spec.o 00:03:20.004 CXX test/cpp_headers/iscsi_spec.o 00:03:20.004 CXX test/cpp_headers/json.o 00:03:20.004 CXX test/cpp_headers/jsonrpc.o 00:03:20.004 CXX test/cpp_headers/keyring.o 00:03:20.004 CXX test/cpp_headers/keyring_module.o 00:03:20.004 LINK histogram_perf 00:03:20.004 CXX test/cpp_headers/likely.o 00:03:20.004 LINK poller_perf 00:03:20.004 CXX test/cpp_headers/log.o 00:03:20.004 CXX test/cpp_headers/lvol.o 00:03:20.004 CXX test/cpp_headers/memory.o 00:03:20.004 CXX test/cpp_headers/mmio.o 00:03:20.004 LINK reactor_perf 00:03:20.004 CXX test/cpp_headers/nbd.o 00:03:20.004 CXX test/cpp_headers/notify.o 00:03:20.004 CXX test/cpp_headers/nvme.o 00:03:20.004 CXX test/cpp_headers/nvme_intel.o 00:03:20.004 CXX test/cpp_headers/nvme_ocssd.o 00:03:20.004 CXX test/cpp_headers/nvme_spec.o 00:03:20.004 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:20.004 CXX test/cpp_headers/nvme_zns.o 00:03:20.004 LINK vtophys 00:03:20.004 CXX test/cpp_headers/nvmf_cmd.o 00:03:20.004 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:20.004 CXX test/cpp_headers/nvmf.o 00:03:20.004 LINK event_perf 00:03:20.004 CXX test/cpp_headers/nvmf_spec.o 00:03:20.004 CXX test/cpp_headers/nvmf_transport.o 00:03:20.004 CXX test/cpp_headers/opal.o 00:03:20.004 CXX test/cpp_headers/opal_spec.o 00:03:20.004 CXX test/cpp_headers/pci_ids.o 00:03:20.004 LINK verify 00:03:20.004 LINK reactor 00:03:20.004 LINK stub 00:03:20.004 LINK env_dpdk_post_init 00:03:20.004 LINK app_repeat 00:03:20.004 CXX test/cpp_headers/pipe.o 00:03:20.004 LINK connect_stress 00:03:20.004 LINK ioat_perf 00:03:20.004 LINK pmr_persistence 00:03:20.004 fio_plugin.c:1559:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:03:20.004 struct spdk_nvme_fdp_ruhs ruhs; 00:03:20.004 ^ 00:03:20.004 LINK err_injection 00:03:20.004 LINK cmb_copy 00:03:20.004 CXX test/cpp_headers/reduce.o 00:03:20.004 LINK startup 00:03:20.004 CXX test/cpp_headers/queue.o 00:03:20.004 LINK doorbell_aers 00:03:20.271 LINK hotplug 00:03:20.271 CXX test/cpp_headers/rpc.o 00:03:20.271 CXX test/cpp_headers/scheduler.o 00:03:20.271 LINK boot_partition 00:03:20.271 LINK hello_world 00:03:20.271 LINK reserve 00:03:20.271 LINK hello_sock 00:03:20.271 LINK simple_copy 00:03:20.271 LINK fused_ordering 00:03:20.271 LINK spdk_trace 00:03:20.271 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:20.271 CXX test/cpp_headers/scsi.o 00:03:20.271 LINK bdev_svc 00:03:20.271 LINK nvme_dp 00:03:20.271 LINK thread 00:03:20.271 LINK mkfs 00:03:20.271 LINK hello_blob 00:03:20.271 LINK hello_bdev 00:03:20.271 LINK aer 00:03:20.271 LINK scheduler 00:03:20.271 LINK reset 00:03:20.271 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:03:20.271 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:03:20.271 LINK mem_callbacks 00:03:20.271 LINK sgl 00:03:20.271 LINK overhead 00:03:20.271 CXX test/cpp_headers/scsi_spec.o 00:03:20.271 LINK reconnect 00:03:20.271 CXX 
test/cpp_headers/sock.o 00:03:20.271 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:20.271 LINK nvmf 00:03:20.271 CXX test/cpp_headers/stdinc.o 00:03:20.271 CXX test/cpp_headers/string.o 00:03:20.271 LINK fdp 00:03:20.271 CXX test/cpp_headers/thread.o 00:03:20.271 CXX test/cpp_headers/trace.o 00:03:20.271 LINK idxd_perf 00:03:20.271 CXX test/cpp_headers/trace_parser.o 00:03:20.271 CXX test/cpp_headers/tree.o 00:03:20.271 CXX test/cpp_headers/ublk.o 00:03:20.271 CXX test/cpp_headers/util.o 00:03:20.271 CXX test/cpp_headers/uuid.o 00:03:20.271 CXX test/cpp_headers/version.o 00:03:20.271 LINK abort 00:03:20.271 CXX test/cpp_headers/vfio_user_pci.o 00:03:20.271 CXX test/cpp_headers/vfio_user_spec.o 00:03:20.271 CXX test/cpp_headers/vhost.o 00:03:20.271 CXX test/cpp_headers/vmd.o 00:03:20.271 CXX test/cpp_headers/xor.o 00:03:20.271 CXX test/cpp_headers/zipf.o 00:03:20.271 LINK spdk_dd 00:03:20.271 LINK test_dma 00:03:20.531 LINK arbitration 00:03:20.531 LINK nvme_manage 00:03:20.531 LINK pci_ut 00:03:20.531 LINK bdevio 00:03:20.531 LINK accel_perf 00:03:20.531 LINK dif 00:03:20.531 LINK nvme_compliance 00:03:20.531 LINK nvme_fuzz 00:03:20.531 LINK blobcli 00:03:20.531 1 warning generated. 00:03:20.790 LINK llvm_vfio_fuzz 00:03:20.790 LINK spdk_nvme 00:03:20.790 LINK spdk_nvme_identify 00:03:20.790 LINK spdk_bdev 00:03:20.790 LINK memory_ut 00:03:20.790 LINK vhost_fuzz 00:03:20.790 LINK spdk_nvme_perf 00:03:20.790 LINK bdevperf 00:03:21.061 LINK llvm_nvme_fuzz 00:03:21.061 LINK spdk_top 00:03:21.635 LINK cuse 00:03:21.635 LINK iscsi_fuzz 00:03:21.635 LINK spdk_lock 00:03:24.170 LINK esnap 00:03:24.170 00:03:24.170 real 0m24.364s 00:03:24.170 user 4m48.079s 00:03:24.170 sys 1m54.788s 00:03:24.170 19:51:11 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:24.170 19:51:11 make -- common/autotest_common.sh@10 -- $ set +x 00:03:24.170 ************************************ 00:03:24.170 END TEST make 00:03:24.170 ************************************ 00:03:24.170 19:51:11 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:24.170 19:51:11 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:24.170 19:51:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:24.170 19:51:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.170 19:51:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:24.170 19:51:11 -- pm/common@44 -- $ pid=3532777 00:03:24.170 19:51:11 -- pm/common@50 -- $ kill -TERM 3532777 00:03:24.170 19:51:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.170 19:51:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:24.170 19:51:11 -- pm/common@44 -- $ pid=3532778 00:03:24.170 19:51:11 -- pm/common@50 -- $ kill -TERM 3532778 00:03:24.170 19:51:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.170 19:51:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:24.170 19:51:11 -- pm/common@44 -- $ pid=3532780 00:03:24.170 19:51:11 -- pm/common@50 -- $ kill -TERM 3532780 00:03:24.170 19:51:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.170 19:51:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:24.170 19:51:11 -- pm/common@44 -- $ pid=3532802 00:03:24.170 19:51:11 -- pm/common@50 -- $ 
sudo -E kill -TERM 3532802 00:03:24.430 19:51:11 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:03:24.430 19:51:11 -- nvmf/common.sh@7 -- # uname -s 00:03:24.430 19:51:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:24.430 19:51:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:24.430 19:51:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:24.430 19:51:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:24.430 19:51:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:24.430 19:51:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:24.430 19:51:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:24.430 19:51:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:24.430 19:51:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:24.430 19:51:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:24.430 19:51:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:03:24.430 19:51:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:03:24.430 19:51:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:24.430 19:51:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:24.430 19:51:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:24.430 19:51:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:24.430 19:51:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:03:24.430 19:51:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:24.430 19:51:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:24.430 19:51:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:24.430 19:51:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.430 19:51:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.430 19:51:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.430 19:51:11 -- paths/export.sh@5 -- # export PATH 00:03:24.430 19:51:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.430 19:51:11 -- nvmf/common.sh@47 -- # : 0 00:03:24.430 19:51:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:24.430 19:51:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:24.430 19:51:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:24.430 19:51:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:03:24.430 19:51:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:24.430 19:51:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:24.430 19:51:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:24.430 19:51:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:24.430 19:51:11 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:24.430 19:51:11 -- spdk/autotest.sh@32 -- # uname -s 00:03:24.430 19:51:11 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:24.430 19:51:11 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:24.430 19:51:11 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:24.430 19:51:11 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:24.430 19:51:11 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:24.430 19:51:11 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:24.430 19:51:11 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:24.430 19:51:11 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:24.430 19:51:11 -- spdk/autotest.sh@48 -- # udevadm_pid=3608121 00:03:24.430 19:51:11 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:24.430 19:51:11 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:24.430 19:51:11 -- pm/common@17 -- # local monitor 00:03:24.430 19:51:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.430 19:51:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.430 19:51:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.430 19:51:11 -- pm/common@21 -- # date +%s 00:03:24.430 19:51:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.430 19:51:11 -- pm/common@21 -- # date +%s 00:03:24.430 19:51:11 -- pm/common@25 -- # sleep 1 00:03:24.430 19:51:11 -- pm/common@21 -- # date +%s 00:03:24.430 19:51:11 -- pm/common@21 -- # date +%s 00:03:24.430 19:51:11 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720893071 00:03:24.430 19:51:11 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720893071 00:03:24.430 19:51:11 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720893071 00:03:24.430 19:51:11 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720893071 00:03:24.430 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720893071_collect-vmstat.pm.log 00:03:24.430 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720893071_collect-cpu-load.pm.log 00:03:24.430 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720893071_collect-cpu-temp.pm.log 00:03:24.430 Redirecting to 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720893071_collect-bmc-pm.bmc.pm.log 00:03:25.375 19:51:12 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:25.375 19:51:12 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:25.375 19:51:12 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:25.375 19:51:12 -- common/autotest_common.sh@10 -- # set +x 00:03:25.375 19:51:12 -- spdk/autotest.sh@59 -- # create_test_list 00:03:25.375 19:51:12 -- common/autotest_common.sh@744 -- # xtrace_disable 00:03:25.375 19:51:12 -- common/autotest_common.sh@10 -- # set +x 00:03:25.375 19:51:12 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:03:25.375 19:51:12 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:25.375 19:51:12 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:25.375 19:51:12 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:03:25.375 19:51:12 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:25.375 19:51:12 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:25.375 19:51:12 -- common/autotest_common.sh@1451 -- # uname 00:03:25.375 19:51:12 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:03:25.375 19:51:12 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:25.375 19:51:12 -- common/autotest_common.sh@1471 -- # uname 00:03:25.375 19:51:12 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:03:25.375 19:51:12 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:25.375 19:51:12 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:03:25.375 19:51:12 -- spdk/autotest.sh@72 -- # hash lcov 00:03:25.375 19:51:12 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:03:25.375 19:51:12 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:25.375 19:51:12 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:25.375 19:51:12 -- common/autotest_common.sh@10 -- # set +x 00:03:25.375 19:51:12 -- spdk/autotest.sh@91 -- # rm -f 00:03:25.375 19:51:13 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:28.664 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:28.664 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:28.664 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:28.664 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:28.664 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:28.664 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:28.664 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:28.664 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:28.922 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:28.922 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:28.922 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:28.922 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:28.922 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:28.922 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:28.922 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:28.922 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:28.922 0000:d8:00.0 (8086 0a54): 
Already using the nvme driver 00:03:28.922 19:51:16 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:28.922 19:51:16 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:28.922 19:51:16 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:28.922 19:51:16 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:28.922 19:51:16 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:28.922 19:51:16 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:28.922 19:51:16 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:28.922 19:51:16 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:28.922 19:51:16 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:28.922 19:51:16 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:28.922 19:51:16 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:28.922 19:51:16 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:28.922 19:51:16 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:28.922 19:51:16 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:28.922 19:51:16 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:29.180 No valid GPT data, bailing 00:03:29.180 19:51:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:29.180 19:51:16 -- scripts/common.sh@391 -- # pt= 00:03:29.180 19:51:16 -- scripts/common.sh@392 -- # return 1 00:03:29.180 19:51:16 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:29.180 1+0 records in 00:03:29.180 1+0 records out 00:03:29.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00198165 s, 529 MB/s 00:03:29.180 19:51:16 -- spdk/autotest.sh@118 -- # sync 00:03:29.180 19:51:16 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:29.180 19:51:16 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:29.180 19:51:16 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:35.747 19:51:22 -- spdk/autotest.sh@124 -- # uname -s 00:03:35.747 19:51:22 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:35.747 19:51:22 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:35.747 19:51:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:35.747 19:51:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:35.747 19:51:22 -- common/autotest_common.sh@10 -- # set +x 00:03:35.747 ************************************ 00:03:35.747 START TEST setup.sh 00:03:35.747 ************************************ 00:03:35.747 19:51:22 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:35.747 * Looking for test storage... 
00:03:35.747 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:35.747 19:51:22 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:35.747 19:51:22 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:35.747 19:51:22 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:35.747 19:51:22 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:35.747 19:51:22 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:35.747 19:51:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:35.747 ************************************ 00:03:35.747 START TEST acl 00:03:35.747 ************************************ 00:03:35.747 19:51:23 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:35.747 * Looking for test storage... 00:03:35.747 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:35.747 19:51:23 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:35.747 19:51:23 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:35.747 19:51:23 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:35.747 19:51:23 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:35.747 19:51:23 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:35.747 19:51:23 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:35.747 19:51:23 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:35.747 19:51:23 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:35.747 19:51:23 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:35.747 19:51:23 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:35.747 19:51:23 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:35.747 19:51:23 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:35.747 19:51:23 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:35.747 19:51:23 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:35.747 19:51:23 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:35.747 19:51:23 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:39.947 19:51:26 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:39.947 19:51:26 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:39.947 19:51:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.947 19:51:26 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:39.947 19:51:26 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.947 19:51:26 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:43.239 Hugepages 00:03:43.239 node hugesize free / total 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r 
_ dev _ _ _ driver _ 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.239 00:03:43.239 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.239 19:51:30 setup.sh.acl 
-- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:43.239 19:51:30 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:43.239 19:51:30 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:43.239 19:51:30 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:43.239 19:51:30 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:43.239 ************************************ 00:03:43.239 START TEST denied 00:03:43.239 ************************************ 00:03:43.239 19:51:30 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:03:43.239 19:51:30 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:43.239 19:51:30 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup 
output config 00:03:43.239 19:51:30 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:43.239 19:51:30 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.239 19:51:30 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:46.575 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:46.575 19:51:33 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:46.575 19:51:33 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:46.575 19:51:33 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:46.575 19:51:33 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:46.575 19:51:33 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:46.575 19:51:33 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:46.575 19:51:33 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:46.575 19:51:33 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:46.575 19:51:33 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:46.575 19:51:33 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:50.767 00:03:50.767 real 0m7.911s 00:03:50.767 user 0m2.518s 00:03:50.767 sys 0m4.732s 00:03:50.767 19:51:38 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:50.767 19:51:38 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:50.767 ************************************ 00:03:50.767 END TEST denied 00:03:50.767 ************************************ 00:03:50.767 19:51:38 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:50.767 19:51:38 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:50.767 19:51:38 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:50.767 19:51:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:51.026 ************************************ 00:03:51.026 START TEST allowed 00:03:51.026 ************************************ 00:03:51.026 19:51:38 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:03:51.026 19:51:38 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:51.026 19:51:38 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:51.026 19:51:38 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:51.026 19:51:38 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.026 19:51:38 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:56.294 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:56.294 19:51:43 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:56.294 19:51:43 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:56.294 19:51:43 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:56.294 19:51:43 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:56.294 19:51:43 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:59.579 00:03:59.579 real 0m8.233s 00:03:59.579 user 0m2.195s 00:03:59.579 sys 0m4.484s 00:03:59.579 
19:51:46 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:59.579 19:51:46 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:59.579 ************************************ 00:03:59.579 END TEST allowed 00:03:59.579 ************************************ 00:03:59.579 00:03:59.579 real 0m23.673s 00:03:59.579 user 0m7.493s 00:03:59.579 sys 0m14.235s 00:03:59.579 19:51:46 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:59.579 19:51:46 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:59.579 ************************************ 00:03:59.579 END TEST acl 00:03:59.579 ************************************ 00:03:59.579 19:51:46 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:59.579 19:51:46 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:59.579 19:51:46 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:59.579 19:51:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:59.579 ************************************ 00:03:59.579 START TEST hugepages 00:03:59.579 ************************************ 00:03:59.579 19:51:46 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:59.579 * Looking for test storage... 00:03:59.579 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 40540452 kB' 'MemAvailable: 42923844 kB' 'Buffers: 12536 kB' 'Cached: 11265052 kB' 'SwapCached: 16 kB' 'Active: 9496340 kB' 'Inactive: 2354388 kB' 'Active(anon): 9020988 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 
0 kB' 'Writeback: 0 kB' 'AnonPages: 576492 kB' 'Mapped: 156284 kB' 'Shmem: 8504936 kB' 'KReclaimable: 248508 kB' 'Slab: 783264 kB' 'SReclaimable: 248508 kB' 'SUnreclaim: 534756 kB' 'KernelStack: 21840 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439068 kB' 'Committed_AS: 10433908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213364 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.579 19:51:46 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.579 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:59.580 19:51:46 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.580 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.581 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:59.581 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.581 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.581 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.581 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.581 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:59.581 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:59.581 19:51:46 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:59.581 19:51:46 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:59.581 19:51:46 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:59.581 19:51:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:59.581 ************************************ 00:03:59.581 START TEST default_setup 00:03:59.581 ************************************ 00:03:59.581 19:51:46 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:03:59.581 19:51:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:59.581 19:51:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:59.581 19:51:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:59.581 19:51:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:59.581 19:51:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:59.581 19:51:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:59.581 19:51:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.581 19:51:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:59.581 19:51:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:59.581 19:51:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:59.581 19:51:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.581 19:51:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:59.581 19:51:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.581 19:51:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.581 19:51:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.581 19:51:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:59.581 19:51:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 
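The Hugepagesize lookup traced above (setup/common.sh@31-33) is a field-by-field scan of /proc/meminfo: each line is split on ': ', every field that is not the requested one is skipped with continue, and the value is echoed once the name matches. A minimal standalone sketch of that pattern, without the SPDK helper's per-node handling, looks roughly like this:

  get_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # echo the value once the requested field matches, e.g. Hugepagesize -> 2048
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }
  get_meminfo Hugepagesize    # prints 2048 on this runner, i.e. 2 MB huge pages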
00:03:59.581 19:51:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:59.581 19:51:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:59.581 19:51:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:59.581 19:51:46 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.581 19:51:46 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:02.108 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:02.366 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:02.366 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:02.366 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:02.366 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:02.366 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:02.366 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:02.366 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:02.366 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:02.366 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:02.366 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:02.366 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:02.366 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:02.366 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:02.366 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:02.366 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:04.279 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.279 19:51:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42755968 kB' 'MemAvailable: 45139360 kB' 'Buffers: 12536 kB' 'Cached: 11265176 kB' 'SwapCached: 16 kB' 'Active: 9513276 kB' 'Inactive: 2354388 kB' 'Active(anon): 9037924 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 593528 kB' 'Mapped: 156388 kB' 'Shmem: 8505060 kB' 'KReclaimable: 248508 kB' 'Slab: 781288 kB' 'SReclaimable: 248508 kB' 'SUnreclaim: 532780 kB' 'KernelStack: 22096 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10449408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213508 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.279 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
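Before this verification pass, clear_hp (setup/hugepages.sh@39-41 above) zeroed every per-node pool and default_setup asked for 1024 default-size pages on node 0. The knobs involved are the per-node sysfs files iterated at hugepages.sh@40 and the global /proc/sys/vm/nr_hugepages named at hugepages.sh@18; a rough by-hand equivalent (sudo tee is only an assumption for a non-root shell, the CI job itself runs as root) would be:

  # zero every huge page pool on every NUMA node, as clear_hp does
  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
          echo 0 | sudo tee "$hp/nr_hugepages" >/dev/null
      done
  done
  # then request 1024 default-size (2048 kB) pages through the global knob
  echo 1024 | sudo tee /proc/sys/vm/nr_hugepages >/dev/null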
00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.280 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42758676 kB' 'MemAvailable: 45142068 kB' 'Buffers: 12536 kB' 'Cached: 11265180 kB' 'SwapCached: 16 kB' 'Active: 9513288 kB' 'Inactive: 2354388 kB' 'Active(anon): 9037936 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 593412 kB' 'Mapped: 156408 kB' 'Shmem: 8505064 kB' 'KReclaimable: 248508 kB' 'Slab: 781268 kB' 'SReclaimable: 248508 kB' 'SUnreclaim: 532760 kB' 'KernelStack: 22064 kB' 'PageTables: 8208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10449424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213508 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
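With anon=0 established, verify_nr_hugepages re-reads the counters (HugePages_Surp here); the meminfo dump above already shows the state expected after default_setup: HugePages_Total and HugePages_Free at 1024, and Hugetlb at 1024 x 2048 kB = 2097152 kB. The same check can be run by hand on the test node:

  # overall pool state
  grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb|AnonHugePages):' /proc/meminfo
  # per-node split of the 2048 kB pool
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages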
00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.281 19:51:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.281 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.282 19:51:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42764992 kB' 'MemAvailable: 45148384 kB' 'Buffers: 12536 kB' 'Cached: 11265200 kB' 'SwapCached: 16 kB' 'Active: 9513468 kB' 'Inactive: 2354388 kB' 'Active(anon): 9038116 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 593476 kB' 'Mapped: 156408 kB' 'Shmem: 8505084 kB' 'KReclaimable: 248508 kB' 'Slab: 781128 kB' 'SReclaimable: 248508 kB' 'SUnreclaim: 532620 kB' 'KernelStack: 22080 kB' 'PageTables: 8616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10449448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213556 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.282 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.283 
19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.283 19:51:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.283 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.284 19:51:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:04.284 nr_hugepages=1024 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:04.284 resv_hugepages=0 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:04.284 surplus_hugepages=0 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:04.284 anon_hugepages=0 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:04.284 19:51:51 
setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.284 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42762964 kB' 'MemAvailable: 45146356 kB' 'Buffers: 12536 kB' 'Cached: 11265220 kB' 'SwapCached: 16 kB' 'Active: 9514020 kB' 'Inactive: 2354388 kB' 'Active(anon): 9038668 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 593960 kB' 'Mapped: 156408 kB' 'Shmem: 8505104 kB' 'KReclaimable: 248508 kB' 'Slab: 781032 kB' 'SReclaimable: 248508 kB' 'SUnreclaim: 532524 kB' 'KernelStack: 22240 kB' 'PageTables: 8952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10447976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213556 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 
19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.285 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.286 19:51:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 25895280 kB' 'MemUsed: 6696804 kB' 'SwapCached: 16 kB' 'Active: 2875444 
kB' 'Inactive: 180704 kB' 'Active(anon): 2658824 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2787192 kB' 'Mapped: 96328 kB' 'AnonPages: 271640 kB' 'Shmem: 2389868 kB' 'KernelStack: 12616 kB' 'PageTables: 5196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127352 kB' 'Slab: 370536 kB' 'SReclaimable: 127352 kB' 'SUnreclaim: 243184 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.286 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:04.287 node0=1024 expecting 1024 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:04.287 00:04:04.287 real 0m4.855s 00:04:04.287 user 0m1.176s 00:04:04.287 sys 0m2.122s 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:04.287 19:51:51 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:04.287 ************************************ 00:04:04.287 END TEST default_setup 00:04:04.287 ************************************ 00:04:04.287 19:51:51 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:04.287 19:51:51 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:04.287 19:51:51 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:04.288 19:51:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:04.288 ************************************ 00:04:04.288 START TEST per_node_1G_alloc 00:04:04.288 
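The get_meminfo scan traced above boils down to a small helper: switch to the node's own meminfo file when a node id is given, strip the "Node <n> " prefix, and return the requested field. A minimal standalone sketch, with simplified logic (the names mirror the trace, but this is not the SPDK setup/common.sh source and the reserved/surplus accounting of hugepages.sh is omitted):

#!/usr/bin/env bash
# Condensed sketch of the per-node meminfo lookup seen in the xtrace above.
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo line var val _
    # Per-node queries read the node's own meminfo instead of /proc/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#"Node $node "}             # node files prefix every line with "Node <n> "
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                         # numeric value; the trailing "kB" falls into $_
            return 0
        fi
    done < "$mem_f"
    return 1
}

# Mirrors the "node0=1024 expecting 1024" check above (surplus handling simplified):
total=$(get_meminfo HugePages_Total 0)   # 1024 in this run
surp=$(get_meminfo HugePages_Surp 0)     # 0 in this run
echo "node0=$total expecting 1024 (HugePages_Surp: $surp)"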
************************************ 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.288 19:51:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:07.664 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:07.664 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:07.664 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:07.664 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:07.664 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:07.664 0000:00:04.2 (8086 2021): Already 
using the vfio-pci driver 00:04:07.664 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:07.664 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:07.664 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:07.664 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:07.664 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:07.664 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:07.664 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:07.664 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:07.664 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:07.664 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:07.664 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:07.664 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:07.664 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42759124 kB' 'MemAvailable: 45142516 kB' 'Buffers: 12536 kB' 'Cached: 11265320 kB' 'SwapCached: 16 kB' 'Active: 9512360 kB' 'Inactive: 2354388 kB' 'Active(anon): 9037008 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 
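The per_node_1G_alloc preamble traced above turns the 1048576 kB request into a per-node page count and hands it to scripts/setup.sh through NRHUGE/HUGENODE. A hedged sketch of that arithmetic follows; the division by Hugepagesize is an assumption, since the trace only shows the resulting 512 pages on nodes 0 and 1:

#!/usr/bin/env bash
# Sketch of the get_test_nr_hugepages / get_test_nr_hugepages_per_node math
# seen in the xtrace above; not the actual hugepages.sh implementation.
size_kb=1048576                                   # 1G request from the test
default_hugepages_kb=2048                         # Hugepagesize reported by /proc/meminfo
user_nodes=(0 1)                                  # HUGENODE targets in this run

nr_hugepages=$(( size_kb / default_hugepages_kb ))    # 512 x 2 MB pages
nodes_test=()
for node in "${user_nodes[@]}"; do
    nodes_test[$node]=$nr_hugepages                   # 512 pages requested on each node
done

# The trace then drives the SPDK setup script with these values, roughly:
#   NRHUGE=512 HUGENODE=0,1 ./scripts/setup.sh
echo "NRHUGE=$nr_hugepages HUGENODE=$(IFS=,; echo "${user_nodes[*]}")"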
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 592124 kB' 'Mapped: 155276 kB' 'Shmem: 8505204 kB' 'KReclaimable: 248508 kB' 'Slab: 781096 kB' 'SReclaimable: 248508 kB' 'SUnreclaim: 532588 kB' 'KernelStack: 21936 kB' 'PageTables: 7964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10442980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213652 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.665 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42762416 kB' 'MemAvailable: 45145808 kB' 'Buffers: 12536 kB' 'Cached: 11265324 kB' 'SwapCached: 16 kB' 'Active: 9512784 kB' 'Inactive: 2354388 kB' 'Active(anon): 9037432 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 592640 kB' 'Mapped: 155232 kB' 'Shmem: 8505208 kB' 'KReclaimable: 248508 kB' 'Slab: 781088 kB' 'SReclaimable: 248508 kB' 'SUnreclaim: 532580 kB' 'KernelStack: 21904 kB' 'PageTables: 8140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10441256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213620 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.666 19:51:54 
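The verify_nr_hugepages preamble traced above first checks the transparent hugepage mode and records the AnonHugePages baseline (0 kB here, so anon=0) before taking the HugePages_Surp snapshot. A rough restatement with simplified control flow, assuming the standard sysfs/procfs paths:

#!/usr/bin/env bash
# Approximate sketch of the anon/surplus checks in the xtrace above;
# names follow the trace, the logic is simplified.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)    # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
    # THP is not globally disabled, so note any anonymous hugepage usage.
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
echo "anon=${anon} surp=${surp}"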
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.666 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 
19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.667 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.668 
19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42763108 kB' 'MemAvailable: 45146500 kB' 'Buffers: 12536 kB' 'Cached: 11265344 kB' 'SwapCached: 16 kB' 'Active: 9512480 kB' 'Inactive: 2354388 kB' 'Active(anon): 9037128 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 592156 kB' 'Mapped: 155224 kB' 'Shmem: 8505228 kB' 'KReclaimable: 248508 kB' 'Slab: 781088 kB' 'SReclaimable: 248508 kB' 'SUnreclaim: 532580 kB' 'KernelStack: 21936 kB' 'PageTables: 7964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10442772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213604 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.668 
19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.668 19:51:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.668 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.669 19:51:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.669 19:51:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.669 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.670 19:51:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.670 19:51:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:07.670 nr_hugepages=1024 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:07.670 resv_hugepages=0 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:07.670 surplus_hugepages=0 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:07.670 anon_hugepages=0 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42764476 kB' 'MemAvailable: 45147868 kB' 'Buffers: 12536 kB' 'Cached: 11265368 kB' 'SwapCached: 16 kB' 'Active: 9512808 kB' 'Inactive: 2354388 kB' 'Active(anon): 9037456 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 592560 kB' 'Mapped: 155224 kB' 'Shmem: 8505252 kB' 'KReclaimable: 248508 kB' 'Slab: 781088 kB' 'SReclaimable: 248508 kB' 'SUnreclaim: 532580 kB' 'KernelStack: 21920 kB' 'PageTables: 8156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10442796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213620 kB' 
'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.670 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.671 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.672 19:51:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 26962704 kB' 'MemUsed: 5629380 kB' 'SwapCached: 16 kB' 'Active: 2873680 kB' 'Inactive: 180704 kB' 'Active(anon): 2657060 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2787244 kB' 'Mapped: 96052 kB' 'AnonPages: 270288 kB' 'Shmem: 2389920 kB' 'KernelStack: 12296 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127352 kB' 'Slab: 370920 kB' 'SReclaimable: 127352 kB' 'SUnreclaim: 243568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.672 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
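The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" followed by "continue" in this stretch are ordinary xtrace output from a field-by-field scan of the node's meminfo file: the helper strips the "Node N " prefix from /sys/devices/system/node/node0/meminfo, splits each line on ": ", and keeps reading until the requested field (here HugePages_Surp) matches. A minimal sketch of that pattern, assuming the same files but using our own function name (node_meminfo) rather than the script's get_meminfo:

#!/usr/bin/env bash
# Minimal sketch of the per-node meminfo scan traced above (not the SPDK helper itself).
shopt -s extglob

node_meminfo() {                       # hypothetical name; SPDK's helper is get_meminfo
    local get=$1 node=$2
    local var val _ line
    local mem_f=/proc/meminfo
    # Prefer the per-node file when it exists, as the trace does for node 0.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node lines look like "Node 0 HugePages_Surp: 0"; drop the prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the continue lines that dominate the trace
        echo "$val"
        return 0
    done
    return 1
}

node_meminfo HugePages_Surp 0   # prints 0 for node 0 on this run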
00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.673 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703148 kB' 'MemFree: 15800700 
kB' 'MemUsed: 11902448 kB' 'SwapCached: 0 kB' 'Active: 6639004 kB' 'Inactive: 2173684 kB' 'Active(anon): 6380272 kB' 'Inactive(anon): 57072 kB' 'Active(file): 258732 kB' 'Inactive(file): 2116612 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8490676 kB' 'Mapped: 59172 kB' 'AnonPages: 322076 kB' 'Shmem: 6115332 kB' 'KernelStack: 9672 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121156 kB' 'Slab: 410168 kB' 'SReclaimable: 121156 kB' 'SUnreclaim: 289012 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 
19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.674 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:07.675 node0=512 expecting 512 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:07.675 node1=512 expecting 512 00:04:07.675 19:51:55 
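The sorted_t[nodes_test[node]]=1 and sorted_s[nodes_sys[node]]=1 assignments just above use array indices as a cheap set: each distinct per-node count becomes a key, so when both nodes hold 512 pages each array collapses to the single key 512, and the [[ 512 == \5\1\2 ]] check that follows is the expanded comparison of those keys. A hedged illustration of the idiom, with our own variable names and a reconstructed final comparison rather than the script's exact expression:

#!/usr/bin/env bash
# Sketch of the "array index as a set" idiom seen in the trace; the final
# comparison is our reconstruction of how the expanded [[ 512 == 512 ]] arises.
measured=(512 512)   # per-node counts reported by the meminfo reads
expected=(512 512)   # per-node counts the test configured

declare -a seen_measured seen_expected
for node in "${!measured[@]}"; do
    seen_measured[${measured[node]}]=1   # index 512 becomes the set member
    seen_expected[${expected[node]}]=1
done

# With an even split both sets hold exactly one key, 512, so this passes.
[[ ${!seen_measured[*]} == "${!seen_expected[*]}" ]] \
    && echo "per-node hugepage counts match: ${!seen_measured[*]}"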
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:07.675 00:04:07.675 real 0m3.167s 00:04:07.675 user 0m1.166s 00:04:07.675 sys 0m2.026s 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:07.675 19:51:55 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:07.675 ************************************ 00:04:07.675 END TEST per_node_1G_alloc 00:04:07.675 ************************************ 00:04:07.675 19:51:55 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:07.675 19:51:55 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:07.675 19:51:55 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:07.675 19:51:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:07.675 ************************************ 00:04:07.675 START TEST even_2G_alloc 00:04:07.675 ************************************ 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:07.675 
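The even_2G_alloc case that starts here requests 2097152 kB (2 GiB) of hugepages; at the 2048 kB page size reported in the meminfo dumps below that is 1024 pages, and with _no_nodes=2 the script pre-loads 512 pages per NUMA node before running setup. A small worked sketch of that arithmetic, under a hypothetical helper name (even_alloc_plan) rather than the script's get_test_nr_hugepages:

#!/usr/bin/env bash
# Sketch of the sizing arithmetic behind even_2G_alloc: total kB -> pages -> per node.
even_alloc_plan() {               # hypothetical helper, not SPDK's get_test_nr_hugepages
    local size_kb=$1 nodes=$2
    local page_kb
    page_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    page_kb=${page_kb:-2048}                   # 2048 kB pages on this rig
    local total=$(( size_kb / page_kb ))       # 2097152 / 2048 = 1024
    local per_node=$(( total / nodes ))        # 1024 / 2 = 512
    echo "nr_hugepages=$total"
    local n
    for (( n = 0; n < nodes; n++ )); do
        echo "node$n=$per_node"
    done
}

even_alloc_plan 2097152 2   # -> nr_hugepages=1024, node0=512, node1=512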
19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.675 19:51:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:10.970 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:10.970 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:10.970 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:10.970 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:10.970 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:10.970 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:10.970 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:10.970 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:10.970 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:10.970 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:10.970 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:10.970 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:10.970 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:10.970 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:10.970 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:10.970 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:10.970 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.970 
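Before counting hugepages, verify_nr_hugepages checks /sys/kernel/mm/transparent_hugepage/enabled (the "always [madvise] never" string in the trace) and only reads AnonHugePages when THP is not pinned to [never]; the long field-by-field walk of /proc/meminfo that follows is that read, and it ends with anon=0 on this run. A hedged sketch of the guard, reading the same files but under our own function name:

#!/usr/bin/env bash
# Sketch of the THP guard seen in the trace: skip the AnonHugePages check
# entirely when transparent hugepages are disabled system-wide.
anon_hugepages_kb() {             # hypothetical name; the script goes through get_meminfo
    local thp=/sys/kernel/mm/transparent_hugepage/enabled
    # The file reads e.g. "always [madvise] never"; brackets mark the active mode.
    if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
        awk '/^AnonHugePages:/ {print $2}' /proc/meminfo   # 0 kB on this run
    else
        echo 0
    fi
}

anon_hugepages_kb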
19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42796948 kB' 'MemAvailable: 45180340 kB' 'Buffers: 12536 kB' 'Cached: 11265492 kB' 'SwapCached: 16 kB' 'Active: 9513552 kB' 'Inactive: 2354388 kB' 'Active(anon): 9038200 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 592780 kB' 'Mapped: 155400 kB' 'Shmem: 8505376 kB' 'KReclaimable: 248508 kB' 'Slab: 781016 kB' 'SReclaimable: 248508 kB' 'SUnreclaim: 532508 kB' 'KernelStack: 21888 kB' 'PageTables: 7960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10440960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213476 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.970 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.971 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42798736 kB' 'MemAvailable: 45182128 kB' 'Buffers: 12536 kB' 'Cached: 11265492 kB' 'SwapCached: 16 kB' 'Active: 9512788 kB' 'Inactive: 2354388 kB' 'Active(anon): 9037436 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 592460 kB' 'Mapped: 155256 kB' 'Shmem: 8505376 kB' 'KReclaimable: 248508 kB' 'Slab: 780996 kB' 'SReclaimable: 248508 kB' 'SUnreclaim: 532488 kB' 'KernelStack: 21856 kB' 'PageTables: 7840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10440976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213444 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.972 19:51:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.972 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 
19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:10.973 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42799480 kB' 'MemAvailable: 45182872 kB' 'Buffers: 12536 kB' 'Cached: 11265512 kB' 'SwapCached: 16 kB' 'Active: 9512800 kB' 'Inactive: 2354388 kB' 'Active(anon): 9037448 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 592456 kB' 'Mapped: 155256 kB' 'Shmem: 8505396 kB' 'KReclaimable: 248508 kB' 'Slab: 780996 kB' 'SReclaimable: 248508 kB' 'SUnreclaim: 532488 kB' 'KernelStack: 21856 kB' 'PageTables: 7840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10441000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213444 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.974 19:51:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.974 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:10.975 nr_hugepages=1024 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:10.975 resv_hugepages=0 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:10.975 surplus_hugepages=0 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:10.975 anon_hugepages=0 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:10.975 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:10.975 
19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42799984 kB' 'MemAvailable: 45183376 kB' 'Buffers: 12536 kB' 'Cached: 11265512 kB' 'SwapCached: 16 kB' 'Active: 9512836 kB' 'Inactive: 2354388 kB' 'Active(anon): 9037484 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 592488 kB' 'Mapped: 155256 kB' 'Shmem: 8505396 kB' 'KReclaimable: 248508 kB' 'Slab: 780996 kB' 'SReclaimable: 248508 kB' 'SUnreclaim: 532488 kB' 'KernelStack: 21872 kB' 'PageTables: 7892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10441020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213444 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.976 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.977 
19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.977 19:51:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 26985976 kB' 'MemUsed: 5606108 kB' 'SwapCached: 16 kB' 'Active: 2873920 kB' 'Inactive: 180704 kB' 'Active(anon): 2657300 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2787392 kB' 'Mapped: 96084 kB' 'AnonPages: 270404 kB' 'Shmem: 2390068 kB' 'KernelStack: 12296 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127352 kB' 'Slab: 370736 kB' 'SReclaimable: 127352 kB' 'SUnreclaim: 243384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.977 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.978 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
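[Editor's note] The trace above (and the node1 lookup that follows) is exercising the meminfo parser in setup/common.sh: it picks /proc/meminfo or the per-node /sys/devices/system/node/nodeN/meminfo file, strips the "Node N " prefix, then splits each line on ': ' and returns the value for the requested key. A minimal standalone bash sketch of that pattern is below; it is reconstructed from the xtrace, so anything not visible in the trace (the function name, argument handling, the error path) is an assumption rather than the repository's exact code.

#!/usr/bin/env bash
# Editor's sketch of the get_meminfo pattern traced above -- illustrative only.
shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node N "

get_meminfo_sketch() {
    local get=$1 node=$2          # e.g. HugePages_Surp, 0
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _

    # Per-NUMA-node lookups read the node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node meminfo lines are prefixed with "Node N "; drop that prefix so the
    # key is always the first field, exactly as in /proc/meminfo.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Usage (hypothetical): get_meminfo_sketch HugePages_Total 0   -> prints 512 on this host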
00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703148 kB' 'MemFree: 15813936 kB' 'MemUsed: 11889212 kB' 'SwapCached: 0 kB' 'Active: 6638816 kB' 'Inactive: 2173684 kB' 'Active(anon): 6380084 kB' 'Inactive(anon): 57072 kB' 'Active(file): 258732 kB' 'Inactive(file): 2116612 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8490716 kB' 'Mapped: 59172 kB' 'AnonPages: 321916 kB' 'Shmem: 6115372 kB' 'KernelStack: 9544 kB' 'PageTables: 3364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121156 kB' 'Slab: 410260 kB' 'SReclaimable: 121156 kB' 'SUnreclaim: 289104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.979 
19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.979 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
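[Editor's note] The HugePages_Surp lookups in this stretch feed the per-node bookkeeping in setup/hugepages.sh that produces the "node0=512 expecting 512" / "node1=512 expecting 512" lines a little further down: the 1024-page pool is spread evenly across the two NUMA nodes and each node's reported count is checked against that share. A condensed, standalone sketch of that check follows; it ignores reserved pages and uses illustrative names, so it is not the script's exact algorithm.

#!/usr/bin/env bash
# Editor's sketch of the even-allocation check -- illustrative, not setup/hugepages.sh itself.

verify_even_split_sketch() {
    local nr_hugepages=$1                        # global pool size, e.g. 1024
    local -a nodes=(/sys/devices/system/node/node[0-9]*)
    local per_node=$(( nr_hugepages / ${#nodes[@]} ))
    local node_dir n total surp

    for node_dir in "${nodes[@]}"; do
        n=${node_dir##*node}
        # Per-node counts come from the node's own meminfo file, where lines
        # look like "Node 0 HugePages_Total:  512", so the key is awk field 3.
        total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
        surp=$(awk '$3 == "HugePages_Surp:" {print $4}' "$node_dir/meminfo")
        echo "node$n=$(( total - surp )) expecting $per_node"
        (( total - surp == per_node )) || return 1
    done
}

# Usage (hypothetical): verify_even_split_sketch 1024
#   -> node0=512 expecting 512
#      node1=512 expecting 512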
00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:10.980 node0=512 expecting 512 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:10.980 node1=512 expecting 512 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:10.980 00:04:10.980 real 0m3.307s 00:04:10.980 user 0m1.228s 00:04:10.980 sys 0m2.074s 00:04:10.980 19:51:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:10.980 19:51:58 
setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:10.980 ************************************ 00:04:10.980 END TEST even_2G_alloc 00:04:10.980 ************************************ 00:04:10.980 19:51:58 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:10.980 19:51:58 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:10.980 19:51:58 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:10.980 19:51:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:10.980 ************************************ 00:04:10.980 START TEST odd_alloc 00:04:10.980 ************************************ 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.980 19:51:58 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:14.276 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:14.276 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:14.276 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:14.276 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:14.276 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:14.276 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:14.276 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:14.276 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:14.276 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:14.276 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:14.276 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:14.276 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:14.276 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:14.276 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:14.276 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:14.276 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:14.276 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42851844 kB' 'MemAvailable: 45235220 kB' 'Buffers: 12536 kB' 'Cached: 11265656 kB' 'SwapCached: 16 kB' 'Active: 
9515220 kB' 'Inactive: 2354388 kB' 'Active(anon): 9039868 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 594176 kB' 'Mapped: 155400 kB' 'Shmem: 8505540 kB' 'KReclaimable: 248476 kB' 'Slab: 781192 kB' 'SReclaimable: 248476 kB' 'SUnreclaim: 532716 kB' 'KernelStack: 21904 kB' 'PageTables: 7952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486620 kB' 'Committed_AS: 10441640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213428 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.276 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42854648 kB' 'MemAvailable: 45238024 kB' 'Buffers: 12536 kB' 'Cached: 11265656 kB' 'SwapCached: 16 kB' 'Active: 9514908 kB' 'Inactive: 2354388 kB' 'Active(anon): 9039556 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 594380 kB' 'Mapped: 155272 kB' 'Shmem: 8505540 kB' 'KReclaimable: 248476 kB' 'Slab: 781192 kB' 'SReclaimable: 248476 kB' 'SUnreclaim: 532716 kB' 'KernelStack: 21904 kB' 'PageTables: 7928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486620 kB' 'Committed_AS: 10441656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213396 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.277 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
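[editor's note] The long runs of [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue pairs in this trace are set -x output of get_meminfo in setup/common.sh walking /proc/meminfo one field at a time until the requested key matches; the earlier mem=("${mem[@]#Node +([0-9]) }") step strips the "Node <N> " prefix so the same loop can also scan a per-node meminfo file. A minimal sketch of that parsing pattern, with a hypothetical helper name and without the per-node handling (not the exact SPDK helper):

get_meminfo_sketch() {
    # Print the value of a single /proc/meminfo field, e.g.
    #   get_meminfo_sketch HugePages_Surp   -> 0
    #   get_meminfo_sketch MemFree          -> 42851844
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # IFS of ':' and ' ' splits "MemTotal:  60295232 kB" into
        # var=MemTotal, val=60295232, _=kB
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

The real script captures the whole file with mapfile first and iterates the array, which is why every non-matching field shows up in the trace as its own test-and-continue pair.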
00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.278 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.279 
19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42855400 kB' 'MemAvailable: 45238776 kB' 'Buffers: 12536 kB' 'Cached: 11265692 kB' 'SwapCached: 16 kB' 'Active: 9514284 kB' 'Inactive: 2354388 kB' 'Active(anon): 9038932 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 593664 kB' 'Mapped: 155272 kB' 'Shmem: 8505576 kB' 'KReclaimable: 248476 kB' 'Slab: 781192 kB' 'SReclaimable: 248476 kB' 'SUnreclaim: 532716 kB' 'KernelStack: 21888 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486620 kB' 'Committed_AS: 10441676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213396 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.279 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
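[editor's note] The per-field scans for AnonHugePages, HugePages_Surp and HugePages_Rsvd feed the verification traced a few lines below (hugepages.sh@107-110), where the odd page count configured by this test (1025) is compared against what /proc/meminfo reports. A loose, illustrative restatement of that check, reusing the hypothetical get_meminfo_sketch helper from the earlier sketch; names and structure are assumptions, not the exact hugepages.sh code:

verify_odd_alloc_sketch() {
    local expected=${1:-1025}                   # odd hugepage count this test configures
    local total surp resv anon
    total=$(get_meminfo_sketch HugePages_Total)
    surp=$(get_meminfo_sketch HugePages_Surp)   # surplus pages beyond the static pool
    resv=$(get_meminfo_sketch HugePages_Rsvd)   # pages reserved but not yet faulted in
    anon=$(get_meminfo_sketch AnonHugePages)    # transparent hugepage usage recorded as anon
    echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    # the pool is accepted when the requested count is fully accounted for
    (( expected == total + surp + resv ))
}

In this run all three of surp, resv and anon come back 0 and HugePages_Total is 1025, so the checks below pass.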
00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.280 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.281 
19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.281 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:14.543 nr_hugepages=1025 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:14.543 resv_hugepages=0 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:14.543 surplus_hugepages=0 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:14.543 anon_hugepages=0 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42857360 kB' 'MemAvailable: 45240736 kB' 'Buffers: 12536 kB' 'Cached: 11265696 kB' 'SwapCached: 16 kB' 'Active: 9514788 kB' 'Inactive: 2354388 kB' 'Active(anon): 9039436 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 594196 kB' 'Mapped: 155272 kB' 'Shmem: 8505580 kB' 'KReclaimable: 248476 kB' 'Slab: 781192 kB' 'SReclaimable: 248476 kB' 'SUnreclaim: 532716 kB' 'KernelStack: 21904 kB' 'PageTables: 7952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486620 kB' 'Committed_AS: 10441332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213380 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.543 19:52:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.543 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.544 19:52:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
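The key-by-key scan running above and below is setup/common.sh's get_meminfo walking the captured /proc/meminfo output until it reaches HugePages_Total. The odd_alloc check that consumes the result is plain accounting over the values already echoed in this run (nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0). A minimal, hedged sketch of that check follows; the variable names are illustrative rather than the literal hugepages.sh source.

#!/usr/bin/env bash
# Hedged sketch: the odd_alloc accounting check, using the values visible in
# this run. Variable names are illustrative, not the real hugepages.sh code.
nr_hugepages=1025   # requested odd page count
resv=0              # HugePages_Rsvd read back from /proc/meminfo
surp=0              # HugePages_Surp read back from /proc/meminfo
total=1025          # HugePages_Total read back from /proc/meminfo
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: ${total} pages"
else
    echo "unexpected hugepage count: ${total} != ${nr_hugepages} + ${surp} + ${resv}" >&2
    exit 1
fi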
00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.544 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 26996432 kB' 'MemUsed: 5595652 kB' 'SwapCached: 16 kB' 'Active: 2874904 kB' 'Inactive: 180704 kB' 'Active(anon): 2658284 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 
2787504 kB' 'Mapped: 96100 kB' 'AnonPages: 271236 kB' 'Shmem: 2390180 kB' 'KernelStack: 12344 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127320 kB' 'Slab: 370596 kB' 'SReclaimable: 127320 kB' 'SUnreclaim: 243276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.545 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.546 19:52:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703148 kB' 'MemFree: 15861004 kB' 'MemUsed: 11842144 kB' 'SwapCached: 0 kB' 'Active: 6639380 kB' 'Inactive: 2173684 kB' 'Active(anon): 6380648 kB' 'Inactive(anon): 57072 kB' 'Active(file): 258732 kB' 'Inactive(file): 2116612 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8490760 kB' 'Mapped: 59172 kB' 'AnonPages: 322364 kB' 'Shmem: 6115416 kB' 'KernelStack: 9496 kB' 'PageTables: 3200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121156 kB' 'Slab: 410596 kB' 'SReclaimable: 121156 kB' 'SUnreclaim: 289440 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 
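The printf above is node1's meminfo snapshot being captured for the HugePages_Surp lookup; the scan that follows walks it key by key exactly as before. Below is a simplified sketch of the parsing pattern the trace shows (mapfile the per-node file, strip the leading 'Node <n> ' prefix so keys line up with /proc/meminfo, then read key/value pairs until the requested key appears). The function name and structure are an approximation, not the literal setup/common.sh get_meminfo source.

#!/usr/bin/env bash
# Hedged sketch of the per-node meminfo lookup pattern visible in the trace.
shopt -s extglob
get_node_meminfo() {
    local key=$1 node=$2 mem_f=/proc/meminfo var val _
    # Prefer the per-node view when a node index is given and the file exists.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node lines are prefixed with "Node <n> "; drop it so keys match /proc/meminfo.
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done
    return 1
}
get_node_meminfo HugePages_Surp 1   # prints 0 on the system captured above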
00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.546 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.547 19:52:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
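The remaining lines resolve node1's HugePages_Surp to 0 and then check the per-node totals: the odd 1025-page request landed as 512 pages on node0 and 513 on node1 (see the HugePages_Total values in the two per-node dumps above), and the test accepts the 512/513 split in either node order. One plausible way to produce such a split, matching the numbers in this run, is sketched below; the exact distribution rule lives in setup/hugepages.sh, so this helper is an assumption for illustration only.

#!/usr/bin/env bash
# Hedged sketch: spread an odd hugepage count across NUMA nodes so the totals
# match the 512/513 split reported for 1025 pages on a 2-node system above.
split_hugepages() {
    local total=$1 nodes=$2 base rem i
    base=$(( total / nodes ))   # every node gets the even share
    rem=$(( total % nodes ))    # leftover pages go to the trailing node(s)
    for (( i = 0; i < nodes; i++ )); do
        local n=$base
        (( i >= nodes - rem )) && n=$(( base + 1 ))
        echo "node${i}=${n}"
    done
}
split_hugepages 1025 2   # node0=512, node1=513, as observed in this run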
00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:14.547 node0=512 expecting 513 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:14.547 node1=513 expecting 512 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:14.547 00:04:14.547 real 0m3.467s 00:04:14.547 user 0m1.371s 00:04:14.547 sys 0m2.158s 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:14.547 19:52:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:14.547 ************************************ 00:04:14.547 END TEST odd_alloc 00:04:14.547 ************************************ 00:04:14.547 19:52:02 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:14.547 19:52:02 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:14.547 19:52:02 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:14.547 19:52:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:14.547 ************************************ 00:04:14.547 START TEST custom_alloc 00:04:14.547 ************************************ 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # 
(( size >= default_hugepages )) 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:14.547 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.548 19:52:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:17.834 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:17.834 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:17.834 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:17.834 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:17.834 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:17.834 0000:00:04.2 (8086 
2021): Already using the vfio-pci driver 00:04:17.834 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:17.834 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:17.834 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:17.834 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:17.834 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:17.834 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:17.834 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:17.834 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:17.834 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:17.834 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:17.834 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:17.834 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:17.834 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:17.834 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:17.834 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:17.834 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:17.834 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:17.834 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:17.834 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:17.834 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:17.834 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:17.834 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:17.834 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:17.834 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:17.834 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.834 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.834 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.834 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.834 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.834 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.834 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.834 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 41812736 kB' 'MemAvailable: 44196112 kB' 'Buffers: 12536 kB' 'Cached: 11265816 kB' 'SwapCached: 16 kB' 'Active: 9516500 kB' 'Inactive: 2354388 kB' 'Active(anon): 9041148 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 595352 kB' 'Mapped: 
155412 kB' 'Shmem: 8505700 kB' 'KReclaimable: 248476 kB' 'Slab: 781024 kB' 'SReclaimable: 248476 kB' 'SUnreclaim: 532548 kB' 'KernelStack: 21920 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963356 kB' 'Committed_AS: 10455020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213460 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 
19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.835 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 
-- # local node= 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 41814688 kB' 'MemAvailable: 44198064 kB' 'Buffers: 12536 kB' 'Cached: 11265820 kB' 'SwapCached: 16 kB' 'Active: 9515420 kB' 'Inactive: 2354388 kB' 'Active(anon): 9040068 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 594716 kB' 'Mapped: 155292 kB' 'Shmem: 8505704 kB' 'KReclaimable: 248476 kB' 'Slab: 780984 kB' 'SReclaimable: 248476 kB' 'SUnreclaim: 532508 kB' 'KernelStack: 21824 kB' 'PageTables: 7692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963356 kB' 'Committed_AS: 10443596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213380 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.836 19:52:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.836 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 
19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.837 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.838 19:52:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 41815296 kB' 'MemAvailable: 44198672 kB' 'Buffers: 12536 kB' 'Cached: 11265840 kB' 'SwapCached: 16 kB' 'Active: 9514856 kB' 'Inactive: 2354388 kB' 'Active(anon): 9039504 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 594060 kB' 'Mapped: 155292 kB' 'Shmem: 8505724 kB' 'KReclaimable: 248476 kB' 'Slab: 780996 kB' 'SReclaimable: 248476 kB' 'SUnreclaim: 532520 kB' 'KernelStack: 21872 kB' 'PageTables: 7432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963356 kB' 'Committed_AS: 10443380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213412 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.838 19:52:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.838 19:52:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.838 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 
19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.839 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:17.840 nr_hugepages=1536 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:17.840 resv_hugepages=0 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:17.840 surplus_hugepages=0 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:17.840 anon_hugepages=0 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 41814112 kB' 'MemAvailable: 44197488 kB' 'Buffers: 12536 kB' 'Cached: 11265860 kB' 'SwapCached: 16 kB' 'Active: 9515304 kB' 'Inactive: 2354388 kB' 'Active(anon): 9039952 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 594512 kB' 'Mapped: 155292 kB' 'Shmem: 8505744 kB' 'KReclaimable: 248476 kB' 'Slab: 780996 kB' 'SReclaimable: 248476 kB' 'SUnreclaim: 
532520 kB' 'KernelStack: 21888 kB' 'PageTables: 7720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963356 kB' 'Committed_AS: 10443412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213524 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.840 19:52:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.840 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.841 19:52:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 
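The loop traced above is the meminfo lookup in setup/common.sh: each line of /proc/meminfo (or of a node-local meminfo file) is split on ': ' into a key and a value, non-matching keys fall through to continue, and the first line whose key equals the requested one (HugePages_Rsvd earlier, HugePages_Total here) has its value echoed before the function returns 0. A minimal sketch of that pattern, reconstructed from the trace rather than copied from the script (the function name and the sed-based prefix strip are illustrative), could look like:

    #!/usr/bin/env bash
    # Minimal sketch of the lookup the trace is exercising: scan a meminfo file,
    # drop the "Node N " prefix that per-node files carry, and print the value
    # for one key. Reconstructed from the trace; not the verbatim setup/common.sh.
    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=$(sed -E 's/^Node [0-9]+ //' <<<"$line")  # node files prefix each key
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done <"$mem_f"
        return 1
    }

    # e.g. get_meminfo_sketch HugePages_Total    -> 1536 on this machine
    #      get_meminfo_sketch HugePages_Surp 0   -> 0 (node0)

On this run the lookup reports HugePages_Total: 1536, which is what the per-node pass below is expected to account for (512 pages on node0 and 1024 on node1, as get_nodes records next).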
00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.841 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 26996424 kB' 'MemUsed: 5595660 kB' 'SwapCached: 16 kB' 'Active: 2874296 kB' 'Inactive: 180704 kB' 'Active(anon): 2657676 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2787636 kB' 'Mapped: 96120 kB' 'AnonPages: 270484 kB' 'Shmem: 2390312 kB' 'KernelStack: 12296 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127320 kB' 'Slab: 370392 kB' 'SReclaimable: 127320 kB' 'SUnreclaim: 243072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.842 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.843 19:52:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703148 kB' 'MemFree: 14816680 kB' 'MemUsed: 12886468 kB' 'SwapCached: 0 kB' 'Active: 6641144 kB' 'Inactive: 2173684 kB' 'Active(anon): 6382412 kB' 'Inactive(anon): 57072 kB' 'Active(file): 258732 kB' 'Inactive(file): 2116612 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8490808 kB' 'Mapped: 59172 kB' 'AnonPages: 324112 kB' 'Shmem: 6115464 kB' 'KernelStack: 9656 kB' 'PageTables: 3536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121156 kB' 'Slab: 410604 kB' 'SReclaimable: 121156 kB' 'SUnreclaim: 289448 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.843 19:52:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:17.843 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.102 19:52:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.102 19:52:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:18.102 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:18.103 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:18.103 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:18.103 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:18.103 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:18.103 node0=512 expecting 512 00:04:18.103 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:18.103 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:18.103 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:18.103 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:18.103 node1=1024 expecting 1024 00:04:18.103 19:52:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:18.103 00:04:18.103 real 0m3.424s 00:04:18.103 user 0m1.290s 00:04:18.103 sys 0m2.196s 00:04:18.103 19:52:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:18.103 19:52:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:18.103 ************************************ 00:04:18.103 END TEST custom_alloc 00:04:18.103 ************************************ 00:04:18.103 19:52:05 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:18.103 19:52:05 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:18.103 19:52:05 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:18.103 19:52:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:18.103 ************************************ 00:04:18.103 START TEST no_shrink_alloc 00:04:18.103 ************************************ 00:04:18.103 19:52:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:04:18.103 19:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:18.103 19:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:18.103 19:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:18.103 19:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:18.103 19:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:18.103 19:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:18.103 19:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:18.103 19:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:18.103 19:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:18.103 19:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:18.103 19:52:05 
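Here custom_alloc wraps up: node 0 is left with 512 pages and node 1 with 1024, the "[[ 512,1024 == \5\1\2\,\1\0\2\4 ]]" line is xtrace quoting the literal comparison of the actual against the expected per-node counts, and the test prints its timing before no_shrink_alloc starts. The new test asks get_test_nr_hugepages for 2097152 kB of huge pages pinned to node 0. The division itself is not visible in this excerpt, but with the 2048 kB page size reported in the meminfo dumps, 2097152 kB corresponds to the nr_hugepages=1024 seen in the trace. A rough sketch of the sizing step, with names taken from the trace; treat it as an illustration, not the setup/hugepages.sh source:

    default_hugepages=2048     # kB per page (Hugepagesize in the dumps below)
    size=2097152               # kB requested: get_test_nr_hugepages 2097152 0
    node_ids=(0)               # remaining arguments name the target NUMA nodes

    # Only the "size covers at least one page" branch appears in the trace.
    (( size >= default_hugepages )) && nr_hugepages=$(( size / default_hugepages ))

    # get_test_nr_hugepages_per_node: give each requested node the full count.
    nodes_test=()
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages        # here: nodes_test[0]=1024
    done

"setup output" then re-runs scripts/setup.sh, which is where the list of devices already bound to vfio-pci in the next block comes from.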
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:18.103 19:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:18.103 19:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:18.103 19:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:18.103 19:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:18.103 19:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:18.103 19:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:18.103 19:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:18.103 19:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:18.103 19:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:18.103 19:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.103 19:52:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:21.392 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:21.392 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:21.392 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:21.392 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:21.392 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:21.392 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:21.392 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:21.392 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:21.392 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:21.392 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:21.392 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:21.392 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:21.392 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:21.392 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:21.392 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:21.392 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:21.392 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@17 -- # local get=AnonHugePages 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42899724 kB' 'MemAvailable: 45283100 kB' 'Buffers: 12536 kB' 'Cached: 11265972 kB' 'SwapCached: 16 kB' 'Active: 9516824 kB' 'Inactive: 2354388 kB' 'Active(anon): 9041472 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 595324 kB' 'Mapped: 155432 kB' 'Shmem: 8505856 kB' 'KReclaimable: 248476 kB' 'Slab: 781132 kB' 'SReclaimable: 248476 kB' 'SUnreclaim: 532656 kB' 'KernelStack: 21984 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10445512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213604 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.392 19:52:08 
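verify_nr_hugepages opens by deciding whether anonymous huge pages need to be counted at all: the "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" line is the transparent-hugepage mode string (madvise currently selected) being tested against the literal "[never]", and only because THP is not disabled does the script go on to read AnonHugePages. With no node argument, get_meminfo keeps /proc/meminfo as its source, which is why the dump above is the whole-system view rather than a per-node one. A hedged sketch of that gate, reusing the get_meminfo sketch from earlier; the sysfs path is the usual THP toggle and is an assumption here, since the trace only shows the file's contents:

    # Assumed path: the trace shows only the value "always [madvise] never".
    thp_enabled=$(cat /sys/kernel/mm/transparent_hugepage/enabled)

    anon=0                                   # default for the sketch; the
                                             # disabled branch isn't traced here
    if [[ $thp_enabled != *"[never]"* ]]; then
        # THP is not hard-disabled, so anonymous huge pages may exist.
        anon=$(get_meminfo AnonHugePages)    # system-wide: no node argument
    fi

The same helper is called next for HugePages_Surp and HugePages_Rsvd, which is the field-by-field scanning in the blocks that follow; anon and the surplus count both come back 0 in this excerpt, and the meminfo dumps already show HugePages_Rsvd: 0, so the full 1024 requested pages remain free for the test.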
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.392 19:52:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.392 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 
19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42901244 kB' 'MemAvailable: 45284620 kB' 'Buffers: 12536 kB' 'Cached: 11265972 kB' 'SwapCached: 16 kB' 'Active: 9516088 kB' 'Inactive: 2354388 kB' 'Active(anon): 9040736 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 595212 kB' 'Mapped: 155312 kB' 'Shmem: 8505856 kB' 'KReclaimable: 248476 kB' 'Slab: 781132 kB' 'SReclaimable: 248476 kB' 'SUnreclaim: 532656 kB' 'KernelStack: 21936 kB' 'PageTables: 7620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10445532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213556 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.393 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42899960 kB' 'MemAvailable: 45283336 kB' 'Buffers: 12536 kB' 'Cached: 11266004 kB' 'SwapCached: 16 kB' 'Active: 9516660 kB' 'Inactive: 2354388 kB' 'Active(anon): 9041308 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 595700 kB' 'Mapped: 155296 kB' 'Shmem: 8505888 kB' 'KReclaimable: 248476 kB' 'Slab: 781196 kB' 'SReclaimable: 248476 kB' 'SUnreclaim: 532720 kB' 'KernelStack: 21984 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10445724 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213572 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.394 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:21.395 nr_hugepages=1024 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:21.395 resv_hugepages=0 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:21.395 surplus_hugepages=0 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:21.395 anon_hugepages=0 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42899924 kB' 'MemAvailable: 45283300 kB' 'Buffers: 12536 kB' 'Cached: 11266028 kB' 'SwapCached: 16 kB' 'Active: 9516616 kB' 'Inactive: 2354388 kB' 'Active(anon): 9041264 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 595620 kB' 'Mapped: 155304 kB' 'Shmem: 8505912 kB' 'KReclaimable: 248476 kB' 'Slab: 781196 kB' 'SReclaimable: 248476 kB' 'SUnreclaim: 532720 kB' 'KernelStack: 22048 kB' 'PageTables: 7944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10445948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213620 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.395 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.396 19:52:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.396 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 25964684 kB' 'MemUsed: 6627400 kB' 'SwapCached: 16 kB' 'Active: 2875068 kB' 'Inactive: 180704 kB' 'Active(anon): 2658448 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2787756 kB' 'Mapped: 96132 kB' 'AnonPages: 271152 kB' 'Shmem: 2390432 kB' 'KernelStack: 12360 kB' 'PageTables: 4588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127320 kB' 'Slab: 370428 kB' 'SReclaimable: 127320 kB' 'SUnreclaim: 243108 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.397 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.398 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.398 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.398 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.398 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.398 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.398 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.398 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.398 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:21.398 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:21.398 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:21.398 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:21.398 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:21.398 node0=1024 expecting 1024 00:04:21.398 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:21.398 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:21.398 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:21.398 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:21.398 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.398 19:52:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:24.692 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:24.692 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:24.692 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:24.692 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:24.693 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:24.693 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:24.693 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:24.693 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:24.693 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:24.693 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:24.693 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:24.693 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:04:24.693 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:24.693 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:24.693 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:24.693 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:24.693 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:24.693 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42919792 kB' 'MemAvailable: 45303168 kB' 'Buffers: 12536 kB' 'Cached: 11266124 kB' 'SwapCached: 16 kB' 'Active: 9515924 kB' 'Inactive: 2354388 kB' 'Active(anon): 9040572 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 594400 kB' 'Mapped: 155408 kB' 'Shmem: 8506008 kB' 'KReclaimable: 248476 kB' 'Slab: 780976 kB' 'SReclaimable: 248476 kB' 'SUnreclaim: 532500 kB' 'KernelStack: 22224 kB' 'PageTables: 8068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10446608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213652 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.693 
19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 
19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.693 
19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.693 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 
19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Surp 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42919720 kB' 'MemAvailable: 45303096 kB' 'Buffers: 12536 kB' 'Cached: 11266128 kB' 'SwapCached: 16 kB' 'Active: 9514296 kB' 'Inactive: 2354388 kB' 'Active(anon): 9038944 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 593172 kB' 'Mapped: 155308 kB' 'Shmem: 8506012 kB' 'KReclaimable: 248476 kB' 'Slab: 781252 kB' 'SReclaimable: 248476 kB' 'SUnreclaim: 532776 kB' 'KernelStack: 22016 kB' 'PageTables: 8032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10445136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213636 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.694 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 
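Note: the repeated blocks above and below are xtrace output from the meminfo lookup in setup/common.sh (get_meminfo): the script reads /proc/meminfo (or a per-node meminfo file when a node is given), strips any "Node N " prefix, then scans each "key: value" line with IFS=': ' until the requested key (here AnonHugePages, HugePages_Surp, HugePages_Rsvd, ...) matches, echoes the value, and returns. A minimal sketch of that pattern is below; the function name, argument handling, and loop shape are illustrative reconstructions from the trace, not the verbatim SPDK implementation.

  # Sketch only: reconstructed from the trace above, not the actual setup/common.sh code.
  shopt -s extglob                          # needed for the +([0-9]) pattern below
  get_meminfo_value() {
      local get=$1 node=${2:-}              # e.g. HugePages_Rsvd, optional NUMA node
      local mem_f=/proc/meminfo var val _ line
      # Prefer the per-node meminfo file when a node is requested and present.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")      # strip "Node N " prefix from per-node files
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue  # the hundreds of 'continue' lines in the trace
          echo "$val"                       # value in kB, or a bare page count
          return 0
      done
      return 1
  }
  # Example: get_meminfo_value HugePages_Free   -> e.g. 1024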
00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.695 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42918728 kB' 'MemAvailable: 45302104 kB' 'Buffers: 12536 kB' 'Cached: 11266144 kB' 'SwapCached: 16 kB' 'Active: 9514908 kB' 'Inactive: 2354388 kB' 'Active(anon): 9039556 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 593796 kB' 'Mapped: 155308 kB' 'Shmem: 8506028 kB' 'KReclaimable: 248476 kB' 'Slab: 781252 kB' 'SReclaimable: 248476 kB' 'SUnreclaim: 532776 kB' 'KernelStack: 22048 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10446648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213636 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.696 19:52:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.696 
19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.696 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:24.697 nr_hugepages=1024 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.697 resv_hugepages=0 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.697 surplus_hugepages=0 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.697 anon_hugepages=0 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:24.697 19:52:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295232 kB' 'MemFree: 42918600 kB' 'MemAvailable: 45301976 kB' 'Buffers: 12536 kB' 'Cached: 11266168 kB' 'SwapCached: 16 kB' 'Active: 9514384 kB' 'Inactive: 2354388 kB' 'Active(anon): 9039032 kB' 'Inactive(anon): 57088 kB' 'Active(file): 475352 kB' 'Inactive(file): 2297300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8387836 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 593236 kB' 'Mapped: 155308 kB' 'Shmem: 8506052 kB' 'KReclaimable: 248476 kB' 'Slab: 781252 kB' 'SReclaimable: 248476 kB' 'SUnreclaim: 532776 kB' 'KernelStack: 21984 kB' 'PageTables: 8036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487644 kB' 'Committed_AS: 10446672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213636 kB' 'VmallocChunk: 0 kB' 'Percpu: 74816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 484724 kB' 'DirectMap2M: 8638464 kB' 'DirectMap1G: 59768832 kB' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.697 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
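With surp=0, resv=0 and nr_hugepages=1024 echoed above, hugepages.sh@107-110 asserts that the kernel-reported HugePages_Total equals the requested page count plus any surplus and reserved pages, which is what "no_shrink_alloc" is checking: the allocation must not have shrunk. A self-contained sketch of that arithmetic check (the awk one-liners are assumptions for illustration, not the test's own helpers):

nr_hugepages=1024    # count the test requested earlier
surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
else
    echo "unexpected hugepage count: $total != $((nr_hugepages + surp + resv))" >&2
fi

In this run the check passes with total=1024, surp=0, resv=0, so the script proceeds to verify how the pages are spread across NUMA nodes.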
00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.698 19:52:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.698 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.699 19:52:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 25967708 kB' 'MemUsed: 6624376 kB' 'SwapCached: 16 kB' 'Active: 2875328 kB' 'Inactive: 180704 kB' 'Active(anon): 2658708 kB' 'Inactive(anon): 16 kB' 'Active(file): 216620 kB' 'Inactive(file): 180688 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2787876 kB' 'Mapped: 96136 kB' 'AnonPages: 271324 kB' 'Shmem: 2390552 kB' 'KernelStack: 12344 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 127320 kB' 'Slab: 370568 kB' 'SReclaimable: 127320 kB' 'SUnreclaim: 243248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
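The snapshot above is read from /sys/devices/system/node/node0/meminfo: after the global check passes, get_nodes (hugepages.sh@112) enumerates the NUMA nodes (no_nodes=2 here) and the per-node pass verifies where the 1024 pages actually landed — all on node0 in this run, as the "node0=1024 expecting 1024" line further below confirms. A simplified sketch of that enumeration, using standard sysfs paths rather than the script's nodes_test bookkeeping:

for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # Per-node meminfo lines look like "Node 0 HugePages_Total:  1024".
    pages=$(awk -v n="$node" \
        '$1 == "Node" && $2 == n && $3 == "HugePages_Total:" {print $4}' \
        "$node_dir/meminfo")
    echo "node$node holds ${pages:-0} huge pages"
done

On this host the sketch would report 1024 pages on node0 and 0 on node1, mirroring the nodes_sys values set at hugepages.sh@30.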
00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.699 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.700 19:52:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
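The trace above is setup/common.sh scanning every field of a node's meminfo output until it reaches HugePages_Surp, skipping the rest with continue. A minimal sketch of that lookup under the same IFS=': ' / read -r var val _ pattern; the function name and the default /proc/meminfo source are illustrative assumptions, not the exact helper traced in the log:

# Print the value of one meminfo field (e.g. HugePages_Surp).
# Assumption: plain /proc/meminfo layout; the test reads per-node files instead.
get_meminfo_field() {
    local want=$1 file=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] || continue   # skip every other field, as in the trace
        echo "$val"
        return 0
    done < "$file"
    return 1
}

# Example: surplus huge pages
get_meminfo_field HugePages_Surp

In the test the same loop runs once per NUMA node, which is why the trace repeats for every meminfo field before the node totals are compared.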
00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.700 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.701 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.701 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.701 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.701 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.701 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.701 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.701 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.701 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.701 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.701 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.701 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.701 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.701 19:52:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:24.701 node0=1024 expecting 1024 00:04:24.701 19:52:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:24.701 00:04:24.701 real 0m6.405s 00:04:24.701 user 0m2.317s 00:04:24.701 sys 0m4.154s 00:04:24.701 19:52:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:24.701 19:52:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:24.701 ************************************ 00:04:24.701 END TEST no_shrink_alloc 00:04:24.701 ************************************ 00:04:24.701 19:52:12 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:24.701 19:52:12 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:24.701 19:52:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:24.701 19:52:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:24.701 19:52:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:24.701 19:52:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:24.701 19:52:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:24.701 19:52:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:24.701 19:52:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:24.701 19:52:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:24.701 19:52:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:24.701 19:52:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:24.701 19:52:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:24.701 19:52:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:24.701 00:04:24.701 real 0m25.261s 00:04:24.701 user 0m8.794s 00:04:24.701 sys 0m15.160s 00:04:24.701 19:52:12 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:24.701 19:52:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:24.701 ************************************ 00:04:24.701 END TEST hugepages 00:04:24.701 ************************************ 00:04:24.701 19:52:12 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:24.701 19:52:12 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:24.701 19:52:12 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:24.701 19:52:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:24.701 ************************************ 00:04:24.701 START TEST driver 00:04:24.701 ************************************ 00:04:24.701 19:52:12 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:24.701 * Looking for test storage... 
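The clear_hp trace just above walks every hugepage pool on every NUMA node and echoes 0 into it before the driver tests start. A minimal stand-alone sketch of that reset; the nr_hugepages target file is an assumption (the wrapped trace only shows the echo), and it must run as root:

#!/usr/bin/env bash
# Reset every hugepage size on every NUMA node to zero pages.
clear_hugepages() {
    local node hp
    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            [[ -d $hp ]] || continue
            echo 0 > "$hp/nr_hugepages"   # assumed target file; requires root
        done
    done
}

clear_hugepages
export CLEAR_HUGE=yes   # flag the later setup.sh runs pick up, as in the log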
00:04:24.701 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:24.701 19:52:12 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:24.701 19:52:12 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.701 19:52:12 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:29.972 19:52:16 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:29.972 19:52:16 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:29.972 19:52:16 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:29.972 19:52:16 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:29.972 ************************************ 00:04:29.972 START TEST guess_driver 00:04:29.972 ************************************ 00:04:29.972 19:52:16 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:04:29.972 19:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:29.972 19:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:29.972 19:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:29.972 19:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:29.972 19:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:29.972 19:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:29.972 19:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:29.972 19:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:29.972 19:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:29.972 19:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:04:29.972 19:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:29.972 19:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:29.972 19:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:29.972 19:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:29.972 19:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:29.972 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:29.972 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:29.972 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:29.972 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:29.972 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:29.972 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:29.972 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:29.972 19:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:29.972 19:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:29.972 19:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:29.972 19:52:16 
setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:29.972 19:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:29.972 Looking for driver=vfio-pci 00:04:29.972 19:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.972 19:52:16 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:29.972 19:52:16 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.972 19:52:16 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:32.508 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.508 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.508 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.508 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.508 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.508 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.508 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.508 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.508 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.508 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.508 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.508 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.508 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.508 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.508 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.508 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.508 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.508 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker 
setup_driver 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.767 19:52:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:34.674 19:52:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:34.674 19:52:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:34.674 19:52:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:34.674 19:52:21 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:34.675 19:52:21 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:34.675 19:52:21 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:34.675 19:52:21 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:38.866 00:04:38.866 real 0m9.637s 00:04:38.866 user 0m2.513s 00:04:38.866 sys 0m4.830s 00:04:38.866 19:52:26 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:38.866 19:52:26 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:38.866 ************************************ 00:04:38.866 END TEST guess_driver 00:04:38.866 ************************************ 00:04:39.125 00:04:39.125 real 0m14.426s 00:04:39.125 user 0m3.899s 00:04:39.125 sys 0m7.470s 00:04:39.125 19:52:26 setup.sh.driver -- common/autotest_common.sh@1122 
-- # xtrace_disable 00:04:39.125 19:52:26 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:39.125 ************************************ 00:04:39.125 END TEST driver 00:04:39.125 ************************************ 00:04:39.125 19:52:26 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:39.125 19:52:26 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:39.125 19:52:26 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:39.125 19:52:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:39.125 ************************************ 00:04:39.125 START TEST devices 00:04:39.125 ************************************ 00:04:39.125 19:52:26 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:39.125 * Looking for test storage... 00:04:39.125 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:39.125 19:52:26 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:39.125 19:52:26 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:39.125 19:52:26 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:39.125 19:52:26 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:43.406 19:52:30 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:43.406 19:52:30 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:43.406 19:52:30 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:43.406 19:52:30 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:43.406 19:52:30 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:43.406 19:52:30 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:43.406 19:52:30 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:43.406 19:52:30 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:43.406 19:52:30 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:43.406 19:52:30 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:43.406 19:52:30 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:43.406 19:52:30 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:43.406 19:52:30 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:43.406 19:52:30 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:43.406 19:52:30 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:43.406 19:52:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:43.406 19:52:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:43.406 19:52:30 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:43.406 19:52:30 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:43.406 19:52:30 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:43.406 19:52:30 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:43.406 19:52:30 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py 
nvme0n1 00:04:43.406 No valid GPT data, bailing 00:04:43.406 19:52:30 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:43.406 19:52:30 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:43.406 19:52:30 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:43.406 19:52:30 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:43.406 19:52:30 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:43.406 19:52:30 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:43.406 19:52:30 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:04:43.406 19:52:30 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:04:43.406 19:52:30 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:43.406 19:52:30 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:43.406 19:52:30 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:43.406 19:52:30 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:43.406 19:52:30 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:43.406 19:52:30 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:43.406 19:52:30 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:43.406 19:52:30 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:43.406 ************************************ 00:04:43.406 START TEST nvme_mount 00:04:43.406 ************************************ 00:04:43.406 19:52:30 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:04:43.406 19:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:43.406 19:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:43.406 19:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.406 19:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:43.406 19:52:30 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:43.406 19:52:30 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:43.406 19:52:30 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:43.406 19:52:30 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:43.406 19:52:30 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:43.406 19:52:30 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:43.406 19:52:30 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:43.406 19:52:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:43.406 19:52:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:43.406 19:52:30 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:43.406 19:52:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:43.406 19:52:30 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:43.406 19:52:30 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 
00:04:43.406 19:52:30 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:43.406 19:52:30 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:44.340 Creating new GPT entries in memory. 00:04:44.340 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:44.340 other utilities. 00:04:44.340 19:52:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:44.340 19:52:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:44.340 19:52:31 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:44.340 19:52:31 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:44.340 19:52:31 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:45.275 Creating new GPT entries in memory. 00:04:45.275 The operation has completed successfully. 00:04:45.275 19:52:32 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:45.275 19:52:32 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:45.276 19:52:32 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3637750 00:04:45.276 19:52:32 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.276 19:52:32 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:45.276 19:52:32 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.276 19:52:32 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:45.276 19:52:32 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:45.276 19:52:32 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.276 19:52:32 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:45.276 19:52:32 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:45.276 19:52:32 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:45.276 19:52:32 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.276 19:52:32 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:45.276 19:52:32 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:45.276 19:52:32 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:45.276 19:52:32 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:45.276 19:52:32 setup.sh.devices.nvme_mount -- 
setup/devices.sh@59 -- # local pci status 00:04:45.276 19:52:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.276 19:52:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:45.276 19:52:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:45.276 19:52:32 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.276 19:52:32 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.559 
19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:48.559 19:52:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.559 19:52:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:48.559 19:52:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:48.559 19:52:36 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:48.559 19:52:36 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:48.559 19:52:36 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:48.559 19:52:36 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:48.559 19:52:36 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:48.559 19:52:36 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:48.559 19:52:36 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.559 19:52:36 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:48.559 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:48.559 19:52:36 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:48.559 19:52:36 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:48.818 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:48.818 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:48.818 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:48.818 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:48.818 19:52:36 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:48.818 19:52:36 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 
mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:48.818 19:52:36 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:48.818 19:52:36 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:48.818 19:52:36 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:49.077 19:52:36 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.077 19:52:36 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:49.077 19:52:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:49.077 19:52:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:49.077 19:52:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.077 19:52:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:49.077 19:52:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:49.077 19:52:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:49.077 19:52:36 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:49.077 19:52:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:49.077 19:52:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.077 19:52:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:49.077 19:52:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:49.077 19:52:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.077 19:52:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:52.361 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.361 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.361 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.361 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.362 19:52:39 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:55.655 19:52:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.655 19:52:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:55.655 19:52:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:55.655 19:52:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:55.655 19:52:43 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:55.655 19:52:43 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.655 19:52:43 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:55.655 19:52:43 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:55.655 19:52:43 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:55.655 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:55.655 00:04:55.655 real 0m12.503s 00:04:55.655 user 0m3.614s 00:04:55.655 sys 0m6.812s 00:04:55.655 19:52:43 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:55.655 19:52:43 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:55.655 
************************************ 00:04:55.655 END TEST nvme_mount 00:04:55.655 ************************************ 00:04:55.655 19:52:43 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:55.655 19:52:43 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:55.655 19:52:43 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:55.655 19:52:43 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:55.655 ************************************ 00:04:55.655 START TEST dm_mount 00:04:55.655 ************************************ 00:04:55.655 19:52:43 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:04:55.655 19:52:43 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:55.655 19:52:43 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:55.655 19:52:43 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:55.655 19:52:43 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:55.655 19:52:43 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:55.655 19:52:43 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:55.655 19:52:43 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:55.655 19:52:43 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:55.655 19:52:43 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:55.655 19:52:43 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:55.655 19:52:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:55.655 19:52:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:55.655 19:52:43 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:55.655 19:52:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:55.655 19:52:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:55.655 19:52:43 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:55.655 19:52:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:55.655 19:52:43 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:55.655 19:52:43 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:55.655 19:52:43 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:55.655 19:52:43 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:56.592 Creating new GPT entries in memory. 00:04:56.592 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:56.592 other utilities. 00:04:56.592 19:52:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:56.592 19:52:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:56.592 19:52:44 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:56.592 19:52:44 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:56.592 19:52:44 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:57.972 Creating new GPT entries in memory. 00:04:57.972 The operation has completed successfully. 00:04:57.972 19:52:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:57.972 19:52:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.972 19:52:45 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:57.972 19:52:45 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:57.972 19:52:45 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:58.908 The operation has completed successfully. 00:04:58.908 19:52:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:58.908 19:52:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:58.908 19:52:46 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3642179 00:04:58.908 19:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:58.908 19:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:58.908 19:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:58.908 19:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:58.908 19:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:58.908 19:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:58.908 19:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:58.908 19:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:58.908 19:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:58.908 19:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:58.908 19:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:58.908 19:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:58.908 19:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:58.908 19:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:58.908 19:52:46 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:04:58.908 19:52:46 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:58.908 19:52:46 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:58.908 19:52:46 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:58.908 19:52:46 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:58.908 19:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:58.908 19:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:58.908 19:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:58.909 19:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:58.909 19:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:58.909 19:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:58.909 19:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:58.909 19:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:58.909 19:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:58.909 19:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.909 19:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:58.909 19:52:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:58.909 19:52:46 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.909 19:52:46 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:01.439 19:52:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.439 19:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:01.439 19:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:01.439 19:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:01.439 19:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:01.439 19:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:01.439 19:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:01.439 19:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:01.439 
19:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:01.439 19:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:01.439 19:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:01.439 19:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:01.439 19:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:01.439 19:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:01.439 19:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:01.439 19:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.439 19:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:01.439 19:52:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:01.439 19:52:49 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.439 19:52:49 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:04.727 19:52:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.727 19:52:52 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:04.727 19:52:52 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:04.727 19:52:52 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:04.727 19:52:52 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:04.727 19:52:52 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:04.727 19:52:52 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:04.727 19:52:52 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:04.727 19:52:52 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.727 19:52:52 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:04.727 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:04.727 19:52:52 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:04.727 19:52:52 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:04.727 00:05:04.727 real 0m8.987s 00:05:04.727 user 0m2.011s 00:05:04.727 sys 0m3.957s 00:05:04.727 19:52:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:04.727 19:52:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:04.727 ************************************ 00:05:04.727 END TEST dm_mount 00:05:04.727 ************************************ 00:05:04.727 19:52:52 setup.sh.devices -- setup/devices.sh@1 -- # 
cleanup 00:05:04.727 19:52:52 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:04.727 19:52:52 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:04.727 19:52:52 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.727 19:52:52 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:04.727 19:52:52 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:04.728 19:52:52 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:04.987 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:04.987 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:05:04.987 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:04.987 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:04.987 19:52:52 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:04.987 19:52:52 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:04.987 19:52:52 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:04.987 19:52:52 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.987 19:52:52 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:04.987 19:52:52 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:04.987 19:52:52 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:04.987 00:05:04.987 real 0m25.918s 00:05:04.987 user 0m7.085s 00:05:04.987 sys 0m13.651s 00:05:04.987 19:52:52 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:04.987 19:52:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:04.987 ************************************ 00:05:04.987 END TEST devices 00:05:04.987 ************************************ 00:05:04.987 00:05:04.987 real 1m29.717s 00:05:04.987 user 0m27.429s 00:05:04.987 sys 0m50.827s 00:05:04.987 19:52:52 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:04.987 19:52:52 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:04.987 ************************************ 00:05:04.987 END TEST setup.sh 00:05:04.987 ************************************ 00:05:04.987 19:52:52 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:05:08.277 Hugepages 00:05:08.277 node hugesize free / total 00:05:08.277 node0 1048576kB 0 / 0 00:05:08.277 node0 2048kB 2048 / 2048 00:05:08.277 node1 1048576kB 0 / 0 00:05:08.277 node1 2048kB 0 / 0 00:05:08.277 00:05:08.277 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:08.277 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:08.277 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:08.277 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:08.277 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:08.277 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:08.277 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:08.277 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:08.277 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:08.277 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:08.277 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:08.277 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:08.277 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:08.277 
I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:08.277 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:08.277 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:08.277 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:08.277 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:08.277 19:52:55 -- spdk/autotest.sh@130 -- # uname -s 00:05:08.277 19:52:55 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:08.277 19:52:55 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:08.277 19:52:55 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:11.570 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:11.570 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:11.570 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:11.570 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:11.570 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:11.570 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:11.570 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:11.570 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:11.570 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:11.570 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:11.571 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:11.571 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:11.571 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:11.571 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:11.571 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:11.571 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:12.951 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:12.951 19:53:00 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:14.328 19:53:01 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:14.328 19:53:01 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:14.328 19:53:01 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:14.328 19:53:01 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:14.328 19:53:01 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:14.328 19:53:01 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:14.328 19:53:01 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:14.328 19:53:01 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:14.328 19:53:01 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:14.328 19:53:01 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:14.328 19:53:01 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:d8:00.0 00:05:14.328 19:53:01 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:17.613 Waiting for block devices as requested 00:05:17.613 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:17.613 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:17.613 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:17.613 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:17.873 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:17.873 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:17.873 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:18.134 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:18.134 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:18.134 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:18.430 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:18.431 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:18.431 
0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:18.431 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:18.710 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:18.710 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:18.710 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:18.970 19:53:06 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:18.970 19:53:06 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:18.970 19:53:06 -- common/autotest_common.sh@1498 -- # grep 0000:d8:00.0/nvme/nvme 00:05:18.970 19:53:06 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:05:18.970 19:53:06 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:18.970 19:53:06 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:18.970 19:53:06 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:18.970 19:53:06 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:18.970 19:53:06 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:18.970 19:53:06 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:18.970 19:53:06 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:18.970 19:53:06 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:18.970 19:53:06 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:18.970 19:53:06 -- common/autotest_common.sh@1541 -- # oacs=' 0xe' 00:05:18.970 19:53:06 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:18.970 19:53:06 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:18.970 19:53:06 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:18.970 19:53:06 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:18.970 19:53:06 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:18.970 19:53:06 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:18.970 19:53:06 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:18.970 19:53:06 -- common/autotest_common.sh@1553 -- # continue 00:05:18.970 19:53:06 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:18.970 19:53:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:18.970 19:53:06 -- common/autotest_common.sh@10 -- # set +x 00:05:18.970 19:53:06 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:18.970 19:53:06 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:18.970 19:53:06 -- common/autotest_common.sh@10 -- # set +x 00:05:18.970 19:53:06 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:22.253 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:22.253 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:22.253 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:22.254 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:22.254 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:22.254 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:22.254 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:22.254 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:22.254 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:22.254 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:22.254 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:22.254 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:22.254 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:22.254 0000:80:04.2 (8086 2021): 
ioatdma -> vfio-pci 00:05:22.254 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:22.254 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:24.155 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:24.155 19:53:11 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:24.155 19:53:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.155 19:53:11 -- common/autotest_common.sh@10 -- # set +x 00:05:24.155 19:53:11 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:24.155 19:53:11 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:24.155 19:53:11 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:24.155 19:53:11 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:24.155 19:53:11 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:24.155 19:53:11 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:24.155 19:53:11 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:24.155 19:53:11 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:24.155 19:53:11 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:24.155 19:53:11 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:24.155 19:53:11 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:24.155 19:53:11 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:24.155 19:53:11 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:d8:00.0 00:05:24.156 19:53:11 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:24.156 19:53:11 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:24.156 19:53:11 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:05:24.156 19:53:11 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:24.156 19:53:11 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:05:24.156 19:53:11 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:d8:00.0 00:05:24.156 19:53:11 -- common/autotest_common.sh@1588 -- # [[ -z 0000:d8:00.0 ]] 00:05:24.156 19:53:11 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=3651943 00:05:24.156 19:53:11 -- common/autotest_common.sh@1594 -- # waitforlisten 3651943 00:05:24.156 19:53:11 -- common/autotest_common.sh@827 -- # '[' -z 3651943 ']' 00:05:24.156 19:53:11 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.156 19:53:11 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:24.156 19:53:11 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.156 19:53:11 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:24.156 19:53:11 -- common/autotest_common.sh@10 -- # set +x 00:05:24.156 19:53:11 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.156 [2024-07-13 19:53:11.747802] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
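The opal revert step traced above resolves which controllers to touch by enumerating NVMe transport addresses (BDFs) with gen_nvme.sh and filtering on the PCI device ID 0x0a54 read from sysfs. A minimal sketch of that same idea, assuming the workspace layout used by this job (variable names are illustrative, not the exact helper from autotest_common.sh):

    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    # enumerate NVMe transport addresses (BDFs) from the generated bdev config
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        # keep only controllers whose PCI device ID matches 0x0a54, as in the trace above
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && echo "$bdf"
    done
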
00:05:24.156 [2024-07-13 19:53:11.747877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3651943 ] 00:05:24.156 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.156 [2024-07-13 19:53:11.815265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.416 [2024-07-13 19:53:11.856726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.416 19:53:12 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:24.416 19:53:12 -- common/autotest_common.sh@860 -- # return 0 00:05:24.416 19:53:12 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:05:24.416 19:53:12 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:05:24.416 19:53:12 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:27.700 nvme0n1 00:05:27.700 19:53:15 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:27.700 [2024-07-13 19:53:15.203134] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:27.700 request: 00:05:27.700 { 00:05:27.700 "nvme_ctrlr_name": "nvme0", 00:05:27.700 "password": "test", 00:05:27.700 "method": "bdev_nvme_opal_revert", 00:05:27.700 "req_id": 1 00:05:27.700 } 00:05:27.700 Got JSON-RPC error response 00:05:27.700 response: 00:05:27.700 { 00:05:27.700 "code": -32602, 00:05:27.700 "message": "Invalid parameters" 00:05:27.700 } 00:05:27.700 19:53:15 -- common/autotest_common.sh@1600 -- # true 00:05:27.700 19:53:15 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:05:27.700 19:53:15 -- common/autotest_common.sh@1604 -- # killprocess 3651943 00:05:27.700 19:53:15 -- common/autotest_common.sh@946 -- # '[' -z 3651943 ']' 00:05:27.700 19:53:15 -- common/autotest_common.sh@950 -- # kill -0 3651943 00:05:27.700 19:53:15 -- common/autotest_common.sh@951 -- # uname 00:05:27.700 19:53:15 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:27.700 19:53:15 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3651943 00:05:27.700 19:53:15 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:27.700 19:53:15 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:27.700 19:53:15 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3651943' 00:05:27.700 killing process with pid 3651943 00:05:27.700 19:53:15 -- common/autotest_common.sh@965 -- # kill 3651943 00:05:27.700 19:53:15 -- common/autotest_common.sh@970 -- # wait 3651943 00:05:27.700 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:27.700 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:27.700 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:27.700 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:27.700 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:27.700 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:27.700 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:27.700 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:27.700 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:27.700 EAL: Unexpected size 0 of DMA 
remapping cleared instead of 2097152
00:05:27.961 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:27.961 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:27.961 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:27.961 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:27.961 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:27.961 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:27.961 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:27.961 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:27.961 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:29.864 19:53:17 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:29.864 19:53:17 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:29.864 19:53:17 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:29.864 19:53:17 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:29.864 19:53:17 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:29.864 19:53:17 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:29.864 19:53:17 -- common/autotest_common.sh@10 -- # set +x 00:05:29.864 19:53:17 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:29.864 19:53:17 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:29.864 19:53:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:29.864 19:53:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.864 19:53:17 -- common/autotest_common.sh@10 -- # set +x 00:05:29.864 ************************************ 00:05:29.864 START TEST env 00:05:29.864 ************************************ 00:05:29.864 19:53:17 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:30.123 * Looking for test storage... 
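Each suite in this log is driven through a run_test-style wrapper that prints the START/END banners and the real/user/sys timings shown above. Roughly, and ignoring the argument checks and xtrace handling the real helper in autotest_common.sh performs, the pattern is:

    run_test() {
        # print the opening banner, time the suite, then print the closing banner
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh
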
00:05:30.123 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:05:30.123 19:53:17 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:30.123 19:53:17 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:30.123 19:53:17 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.123 19:53:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.123 ************************************ 00:05:30.123 START TEST env_memory 00:05:30.123 ************************************ 00:05:30.123 19:53:17 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:30.123 00:05:30.123 00:05:30.123 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.123 http://cunit.sourceforge.net/ 00:05:30.123 00:05:30.123 00:05:30.123 Suite: memory 00:05:30.123 Test: alloc and free memory map ...[2024-07-13 19:53:17.647299] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:30.123 passed 00:05:30.123 Test: mem map translation ...[2024-07-13 19:53:17.660469] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:30.123 [2024-07-13 19:53:17.660488] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:30.123 [2024-07-13 19:53:17.660521] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:30.123 [2024-07-13 19:53:17.660531] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:30.123 passed 00:05:30.123 Test: mem map registration ...[2024-07-13 19:53:17.682588] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:30.123 [2024-07-13 19:53:17.682607] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:30.123 passed 00:05:30.123 Test: mem map adjacent registrations ...passed 00:05:30.123 00:05:30.123 Run Summary: Type Total Ran Passed Failed Inactive 00:05:30.123 suites 1 1 n/a 0 0 00:05:30.123 tests 4 4 4 0 0 00:05:30.123 asserts 152 152 152 0 n/a 00:05:30.123 00:05:30.123 Elapsed time = 0.090 seconds 00:05:30.123 00:05:30.123 real 0m0.103s 00:05:30.123 user 0m0.091s 00:05:30.123 sys 0m0.012s 00:05:30.123 19:53:17 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.123 19:53:17 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:30.123 ************************************ 00:05:30.123 END TEST env_memory 00:05:30.123 ************************************ 00:05:30.123 19:53:17 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:30.123 19:53:17 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:30.123 19:53:17 env -- common/autotest_common.sh@1103 
-- # xtrace_disable 00:05:30.123 19:53:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.383 ************************************ 00:05:30.383 START TEST env_vtophys 00:05:30.383 ************************************ 00:05:30.383 19:53:17 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:30.383 EAL: lib.eal log level changed from notice to debug 00:05:30.383 EAL: Detected lcore 0 as core 0 on socket 0 00:05:30.383 EAL: Detected lcore 1 as core 1 on socket 0 00:05:30.383 EAL: Detected lcore 2 as core 2 on socket 0 00:05:30.383 EAL: Detected lcore 3 as core 3 on socket 0 00:05:30.383 EAL: Detected lcore 4 as core 4 on socket 0 00:05:30.383 EAL: Detected lcore 5 as core 5 on socket 0 00:05:30.383 EAL: Detected lcore 6 as core 6 on socket 0 00:05:30.383 EAL: Detected lcore 7 as core 8 on socket 0 00:05:30.383 EAL: Detected lcore 8 as core 9 on socket 0 00:05:30.383 EAL: Detected lcore 9 as core 10 on socket 0 00:05:30.383 EAL: Detected lcore 10 as core 11 on socket 0 00:05:30.383 EAL: Detected lcore 11 as core 12 on socket 0 00:05:30.383 EAL: Detected lcore 12 as core 13 on socket 0 00:05:30.383 EAL: Detected lcore 13 as core 14 on socket 0 00:05:30.383 EAL: Detected lcore 14 as core 16 on socket 0 00:05:30.383 EAL: Detected lcore 15 as core 17 on socket 0 00:05:30.383 EAL: Detected lcore 16 as core 18 on socket 0 00:05:30.383 EAL: Detected lcore 17 as core 19 on socket 0 00:05:30.383 EAL: Detected lcore 18 as core 20 on socket 0 00:05:30.383 EAL: Detected lcore 19 as core 21 on socket 0 00:05:30.383 EAL: Detected lcore 20 as core 22 on socket 0 00:05:30.383 EAL: Detected lcore 21 as core 24 on socket 0 00:05:30.383 EAL: Detected lcore 22 as core 25 on socket 0 00:05:30.383 EAL: Detected lcore 23 as core 26 on socket 0 00:05:30.383 EAL: Detected lcore 24 as core 27 on socket 0 00:05:30.383 EAL: Detected lcore 25 as core 28 on socket 0 00:05:30.383 EAL: Detected lcore 26 as core 29 on socket 0 00:05:30.383 EAL: Detected lcore 27 as core 30 on socket 0 00:05:30.383 EAL: Detected lcore 28 as core 0 on socket 1 00:05:30.383 EAL: Detected lcore 29 as core 1 on socket 1 00:05:30.383 EAL: Detected lcore 30 as core 2 on socket 1 00:05:30.383 EAL: Detected lcore 31 as core 3 on socket 1 00:05:30.383 EAL: Detected lcore 32 as core 4 on socket 1 00:05:30.383 EAL: Detected lcore 33 as core 5 on socket 1 00:05:30.383 EAL: Detected lcore 34 as core 6 on socket 1 00:05:30.383 EAL: Detected lcore 35 as core 8 on socket 1 00:05:30.383 EAL: Detected lcore 36 as core 9 on socket 1 00:05:30.383 EAL: Detected lcore 37 as core 10 on socket 1 00:05:30.383 EAL: Detected lcore 38 as core 11 on socket 1 00:05:30.383 EAL: Detected lcore 39 as core 12 on socket 1 00:05:30.383 EAL: Detected lcore 40 as core 13 on socket 1 00:05:30.383 EAL: Detected lcore 41 as core 14 on socket 1 00:05:30.383 EAL: Detected lcore 42 as core 16 on socket 1 00:05:30.383 EAL: Detected lcore 43 as core 17 on socket 1 00:05:30.383 EAL: Detected lcore 44 as core 18 on socket 1 00:05:30.383 EAL: Detected lcore 45 as core 19 on socket 1 00:05:30.383 EAL: Detected lcore 46 as core 20 on socket 1 00:05:30.383 EAL: Detected lcore 47 as core 21 on socket 1 00:05:30.383 EAL: Detected lcore 48 as core 22 on socket 1 00:05:30.383 EAL: Detected lcore 49 as core 24 on socket 1 00:05:30.383 EAL: Detected lcore 50 as core 25 on socket 1 00:05:30.383 EAL: Detected lcore 51 as core 26 on socket 1 00:05:30.383 EAL: Detected lcore 52 as core 27 on socket 1 
00:05:30.383 EAL: Detected lcore 53 as core 28 on socket 1 00:05:30.383 EAL: Detected lcore 54 as core 29 on socket 1 00:05:30.383 EAL: Detected lcore 55 as core 30 on socket 1 00:05:30.383 EAL: Detected lcore 56 as core 0 on socket 0 00:05:30.383 EAL: Detected lcore 57 as core 1 on socket 0 00:05:30.383 EAL: Detected lcore 58 as core 2 on socket 0 00:05:30.383 EAL: Detected lcore 59 as core 3 on socket 0 00:05:30.383 EAL: Detected lcore 60 as core 4 on socket 0 00:05:30.383 EAL: Detected lcore 61 as core 5 on socket 0 00:05:30.383 EAL: Detected lcore 62 as core 6 on socket 0 00:05:30.383 EAL: Detected lcore 63 as core 8 on socket 0 00:05:30.383 EAL: Detected lcore 64 as core 9 on socket 0 00:05:30.383 EAL: Detected lcore 65 as core 10 on socket 0 00:05:30.383 EAL: Detected lcore 66 as core 11 on socket 0 00:05:30.383 EAL: Detected lcore 67 as core 12 on socket 0 00:05:30.383 EAL: Detected lcore 68 as core 13 on socket 0 00:05:30.383 EAL: Detected lcore 69 as core 14 on socket 0 00:05:30.383 EAL: Detected lcore 70 as core 16 on socket 0 00:05:30.383 EAL: Detected lcore 71 as core 17 on socket 0 00:05:30.383 EAL: Detected lcore 72 as core 18 on socket 0 00:05:30.383 EAL: Detected lcore 73 as core 19 on socket 0 00:05:30.383 EAL: Detected lcore 74 as core 20 on socket 0 00:05:30.384 EAL: Detected lcore 75 as core 21 on socket 0 00:05:30.384 EAL: Detected lcore 76 as core 22 on socket 0 00:05:30.384 EAL: Detected lcore 77 as core 24 on socket 0 00:05:30.384 EAL: Detected lcore 78 as core 25 on socket 0 00:05:30.384 EAL: Detected lcore 79 as core 26 on socket 0 00:05:30.384 EAL: Detected lcore 80 as core 27 on socket 0 00:05:30.384 EAL: Detected lcore 81 as core 28 on socket 0 00:05:30.384 EAL: Detected lcore 82 as core 29 on socket 0 00:05:30.384 EAL: Detected lcore 83 as core 30 on socket 0 00:05:30.384 EAL: Detected lcore 84 as core 0 on socket 1 00:05:30.384 EAL: Detected lcore 85 as core 1 on socket 1 00:05:30.384 EAL: Detected lcore 86 as core 2 on socket 1 00:05:30.384 EAL: Detected lcore 87 as core 3 on socket 1 00:05:30.384 EAL: Detected lcore 88 as core 4 on socket 1 00:05:30.384 EAL: Detected lcore 89 as core 5 on socket 1 00:05:30.384 EAL: Detected lcore 90 as core 6 on socket 1 00:05:30.384 EAL: Detected lcore 91 as core 8 on socket 1 00:05:30.384 EAL: Detected lcore 92 as core 9 on socket 1 00:05:30.384 EAL: Detected lcore 93 as core 10 on socket 1 00:05:30.384 EAL: Detected lcore 94 as core 11 on socket 1 00:05:30.384 EAL: Detected lcore 95 as core 12 on socket 1 00:05:30.384 EAL: Detected lcore 96 as core 13 on socket 1 00:05:30.384 EAL: Detected lcore 97 as core 14 on socket 1 00:05:30.384 EAL: Detected lcore 98 as core 16 on socket 1 00:05:30.384 EAL: Detected lcore 99 as core 17 on socket 1 00:05:30.384 EAL: Detected lcore 100 as core 18 on socket 1 00:05:30.384 EAL: Detected lcore 101 as core 19 on socket 1 00:05:30.384 EAL: Detected lcore 102 as core 20 on socket 1 00:05:30.384 EAL: Detected lcore 103 as core 21 on socket 1 00:05:30.384 EAL: Detected lcore 104 as core 22 on socket 1 00:05:30.384 EAL: Detected lcore 105 as core 24 on socket 1 00:05:30.384 EAL: Detected lcore 106 as core 25 on socket 1 00:05:30.384 EAL: Detected lcore 107 as core 26 on socket 1 00:05:30.384 EAL: Detected lcore 108 as core 27 on socket 1 00:05:30.384 EAL: Detected lcore 109 as core 28 on socket 1 00:05:30.384 EAL: Detected lcore 110 as core 29 on socket 1 00:05:30.384 EAL: Detected lcore 111 as core 30 on socket 1 00:05:30.384 EAL: Maximum logical cores by configuration: 128 00:05:30.384 EAL: 
Detected CPU lcores: 112 00:05:30.384 EAL: Detected NUMA nodes: 2 00:05:30.384 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:30.384 EAL: Checking presence of .so 'librte_eal.so.23' 00:05:30.384 EAL: Checking presence of .so 'librte_eal.so' 00:05:30.384 EAL: Detected static linkage of DPDK 00:05:30.384 EAL: No shared files mode enabled, IPC will be disabled 00:05:30.384 EAL: Bus pci wants IOVA as 'DC' 00:05:30.384 EAL: Buses did not request a specific IOVA mode. 00:05:30.384 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:30.384 EAL: Selected IOVA mode 'VA' 00:05:30.384 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.384 EAL: Probing VFIO support... 00:05:30.384 EAL: IOMMU type 1 (Type 1) is supported 00:05:30.384 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:30.384 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:30.384 EAL: VFIO support initialized 00:05:30.384 EAL: Ask a virtual area of 0x2e000 bytes 00:05:30.384 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:30.384 EAL: Setting up physically contiguous memory... 00:05:30.384 EAL: Setting maximum number of open files to 524288 00:05:30.384 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:30.384 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:30.384 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:30.384 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.384 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:30.384 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.384 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.384 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:30.384 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:30.384 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.384 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:30.384 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.384 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.384 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:30.384 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:30.384 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.384 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:30.384 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.384 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.384 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:30.384 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:30.384 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.384 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:30.384 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.384 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.384 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:30.384 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:30.384 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:30.384 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.384 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:30.384 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:30.384 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.384 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:30.384 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:30.384 EAL: Ask a 
virtual area of 0x61000 bytes 00:05:30.384 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:30.384 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:30.384 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.384 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:30.384 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:30.384 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.384 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:30.384 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:30.384 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.384 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:30.384 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:30.384 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.384 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:30.384 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:30.384 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.384 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:30.384 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:30.384 EAL: Hugepages will be freed exactly as allocated. 00:05:30.384 EAL: No shared files mode enabled, IPC is disabled 00:05:30.384 EAL: No shared files mode enabled, IPC is disabled 00:05:30.384 EAL: TSC frequency is ~2500000 KHz 00:05:30.384 EAL: Main lcore 0 is ready (tid=7f4948dcaa00;cpuset=[0]) 00:05:30.384 EAL: Trying to obtain current memory policy. 00:05:30.384 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.384 EAL: Restoring previous memory policy: 0 00:05:30.384 EAL: request: mp_malloc_sync 00:05:30.384 EAL: No shared files mode enabled, IPC is disabled 00:05:30.384 EAL: Heap on socket 0 was expanded by 2MB 00:05:30.384 EAL: No shared files mode enabled, IPC is disabled 00:05:30.384 EAL: Mem event callback 'spdk:(nil)' registered 00:05:30.384 00:05:30.384 00:05:30.384 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.384 http://cunit.sourceforge.net/ 00:05:30.384 00:05:30.384 00:05:30.384 Suite: components_suite 00:05:30.384 Test: vtophys_malloc_test ...passed 00:05:30.384 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:30.384 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.384 EAL: Restoring previous memory policy: 4 00:05:30.384 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.384 EAL: request: mp_malloc_sync 00:05:30.384 EAL: No shared files mode enabled, IPC is disabled 00:05:30.384 EAL: Heap on socket 0 was expanded by 4MB 00:05:30.384 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.384 EAL: request: mp_malloc_sync 00:05:30.384 EAL: No shared files mode enabled, IPC is disabled 00:05:30.384 EAL: Heap on socket 0 was shrunk by 4MB 00:05:30.384 EAL: Trying to obtain current memory policy. 00:05:30.384 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.384 EAL: Restoring previous memory policy: 4 00:05:30.384 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.384 EAL: request: mp_malloc_sync 00:05:30.384 EAL: No shared files mode enabled, IPC is disabled 00:05:30.384 EAL: Heap on socket 0 was expanded by 6MB 00:05:30.384 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.384 EAL: request: mp_malloc_sync 00:05:30.384 EAL: No shared files mode enabled, IPC is disabled 00:05:30.384 EAL: Heap on socket 0 was shrunk by 6MB 00:05:30.384 EAL: Trying to obtain current memory policy. 
00:05:30.384 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.384 EAL: Restoring previous memory policy: 4 00:05:30.384 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.384 EAL: request: mp_malloc_sync 00:05:30.384 EAL: No shared files mode enabled, IPC is disabled 00:05:30.384 EAL: Heap on socket 0 was expanded by 10MB 00:05:30.384 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.384 EAL: request: mp_malloc_sync 00:05:30.384 EAL: No shared files mode enabled, IPC is disabled 00:05:30.384 EAL: Heap on socket 0 was shrunk by 10MB 00:05:30.384 EAL: Trying to obtain current memory policy. 00:05:30.384 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.384 EAL: Restoring previous memory policy: 4 00:05:30.384 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.384 EAL: request: mp_malloc_sync 00:05:30.384 EAL: No shared files mode enabled, IPC is disabled 00:05:30.384 EAL: Heap on socket 0 was expanded by 18MB 00:05:30.384 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.384 EAL: request: mp_malloc_sync 00:05:30.384 EAL: No shared files mode enabled, IPC is disabled 00:05:30.384 EAL: Heap on socket 0 was shrunk by 18MB 00:05:30.384 EAL: Trying to obtain current memory policy. 00:05:30.384 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.384 EAL: Restoring previous memory policy: 4 00:05:30.384 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.384 EAL: request: mp_malloc_sync 00:05:30.384 EAL: No shared files mode enabled, IPC is disabled 00:05:30.384 EAL: Heap on socket 0 was expanded by 34MB 00:05:30.384 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.384 EAL: request: mp_malloc_sync 00:05:30.384 EAL: No shared files mode enabled, IPC is disabled 00:05:30.384 EAL: Heap on socket 0 was shrunk by 34MB 00:05:30.384 EAL: Trying to obtain current memory policy. 00:05:30.384 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.384 EAL: Restoring previous memory policy: 4 00:05:30.384 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.384 EAL: request: mp_malloc_sync 00:05:30.384 EAL: No shared files mode enabled, IPC is disabled 00:05:30.384 EAL: Heap on socket 0 was expanded by 66MB 00:05:30.384 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.384 EAL: request: mp_malloc_sync 00:05:30.384 EAL: No shared files mode enabled, IPC is disabled 00:05:30.384 EAL: Heap on socket 0 was shrunk by 66MB 00:05:30.384 EAL: Trying to obtain current memory policy. 00:05:30.385 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.385 EAL: Restoring previous memory policy: 4 00:05:30.385 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.385 EAL: request: mp_malloc_sync 00:05:30.385 EAL: No shared files mode enabled, IPC is disabled 00:05:30.385 EAL: Heap on socket 0 was expanded by 130MB 00:05:30.385 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.385 EAL: request: mp_malloc_sync 00:05:30.385 EAL: No shared files mode enabled, IPC is disabled 00:05:30.385 EAL: Heap on socket 0 was shrunk by 130MB 00:05:30.385 EAL: Trying to obtain current memory policy. 
00:05:30.385 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.385 EAL: Restoring previous memory policy: 4 00:05:30.385 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.385 EAL: request: mp_malloc_sync 00:05:30.385 EAL: No shared files mode enabled, IPC is disabled 00:05:30.385 EAL: Heap on socket 0 was expanded by 258MB 00:05:30.645 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.645 EAL: request: mp_malloc_sync 00:05:30.645 EAL: No shared files mode enabled, IPC is disabled 00:05:30.645 EAL: Heap on socket 0 was shrunk by 258MB 00:05:30.645 EAL: Trying to obtain current memory policy. 00:05:30.645 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.645 EAL: Restoring previous memory policy: 4 00:05:30.645 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.645 EAL: request: mp_malloc_sync 00:05:30.645 EAL: No shared files mode enabled, IPC is disabled 00:05:30.645 EAL: Heap on socket 0 was expanded by 514MB 00:05:30.645 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.904 EAL: request: mp_malloc_sync 00:05:30.904 EAL: No shared files mode enabled, IPC is disabled 00:05:30.904 EAL: Heap on socket 0 was shrunk by 514MB 00:05:30.904 EAL: Trying to obtain current memory policy. 00:05:30.904 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.904 EAL: Restoring previous memory policy: 4 00:05:30.904 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.904 EAL: request: mp_malloc_sync 00:05:30.904 EAL: No shared files mode enabled, IPC is disabled 00:05:30.904 EAL: Heap on socket 0 was expanded by 1026MB 00:05:31.164 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.423 EAL: request: mp_malloc_sync 00:05:31.423 EAL: No shared files mode enabled, IPC is disabled 00:05:31.423 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:31.423 passed 00:05:31.423 00:05:31.423 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.423 suites 1 1 n/a 0 0 00:05:31.423 tests 2 2 2 0 0 00:05:31.423 asserts 497 497 497 0 n/a 00:05:31.423 00:05:31.423 Elapsed time = 0.961 seconds 00:05:31.423 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.423 EAL: request: mp_malloc_sync 00:05:31.423 EAL: No shared files mode enabled, IPC is disabled 00:05:31.423 EAL: Heap on socket 0 was shrunk by 2MB 00:05:31.423 EAL: No shared files mode enabled, IPC is disabled 00:05:31.423 EAL: No shared files mode enabled, IPC is disabled 00:05:31.423 EAL: No shared files mode enabled, IPC is disabled 00:05:31.423 00:05:31.423 real 0m1.075s 00:05:31.423 user 0m0.618s 00:05:31.423 sys 0m0.434s 00:05:31.423 19:53:18 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.423 19:53:18 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:31.423 ************************************ 00:05:31.423 END TEST env_vtophys 00:05:31.423 ************************************ 00:05:31.423 19:53:18 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:31.423 19:53:18 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:31.423 19:53:18 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.423 19:53:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.423 ************************************ 00:05:31.423 START TEST env_pci 00:05:31.423 ************************************ 00:05:31.423 19:53:18 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:31.423 00:05:31.423 00:05:31.423 CUnit - A unit testing 
framework for C - Version 2.1-3 00:05:31.423 http://cunit.sourceforge.net/ 00:05:31.423 00:05:31.423 00:05:31.423 Suite: pci 00:05:31.423 Test: pci_hook ...[2024-07-13 19:53:18.962880] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3653225 has claimed it 00:05:31.423 EAL: Cannot find device (10000:00:01.0) 00:05:31.423 EAL: Failed to attach device on primary process 00:05:31.423 passed 00:05:31.423 00:05:31.423 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.423 suites 1 1 n/a 0 0 00:05:31.423 tests 1 1 1 0 0 00:05:31.423 asserts 25 25 25 0 n/a 00:05:31.423 00:05:31.423 Elapsed time = 0.038 seconds 00:05:31.423 00:05:31.423 real 0m0.057s 00:05:31.423 user 0m0.010s 00:05:31.423 sys 0m0.046s 00:05:31.423 19:53:19 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.423 19:53:19 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:31.423 ************************************ 00:05:31.423 END TEST env_pci 00:05:31.423 ************************************ 00:05:31.423 19:53:19 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:31.423 19:53:19 env -- env/env.sh@15 -- # uname 00:05:31.423 19:53:19 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:31.423 19:53:19 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:31.423 19:53:19 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:31.423 19:53:19 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:31.423 19:53:19 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.423 19:53:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.683 ************************************ 00:05:31.683 START TEST env_dpdk_post_init 00:05:31.683 ************************************ 00:05:31.683 19:53:19 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:31.683 EAL: Detected CPU lcores: 112 00:05:31.683 EAL: Detected NUMA nodes: 2 00:05:31.683 EAL: Detected static linkage of DPDK 00:05:31.683 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:31.683 EAL: Selected IOVA mode 'VA' 00:05:31.683 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.683 EAL: VFIO support initialized 00:05:31.683 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:31.683 EAL: Using IOMMU type 1 (Type 1) 00:05:32.619 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:35.901 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:35.901 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001000000 00:05:36.467 Starting DPDK initialization... 00:05:36.467 Starting SPDK post initialization... 00:05:36.467 SPDK NVMe probe 00:05:36.467 Attaching to 0000:d8:00.0 00:05:36.467 Attached to 0000:d8:00.0 00:05:36.467 Cleaning up... 
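[Editor's sketch] The env_dpdk_post_init run above brings up EAL on lcore 0, probes the NVMe controller at 0000:d8:00.0 through spdk_nvme, and cleans up again. A rough shell sketch of the same invocation, assuming an SPDK checkout at the workspace path shown in this log and root privileges (the HUGEMEM value passed to setup.sh is an illustrative assumption, not taken from the log):

#!/usr/bin/env bash
# Hypothetical replay of the env_dpdk_post_init step from this log.
SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

# Reserve 2 MB hugepages and bind NVMe controllers to a userspace driver
# (HUGEMEM is in megabytes; 2048 is an assumed value).
sudo HUGEMEM=2048 "$SPDK_DIR/scripts/setup.sh"

# Run the unit test on core 0 with the same EAL arguments the harness uses:
# -c 0x1 restricts it to lcore 0, --base-virtaddr matches the CI invocation.
sudo "$SPDK_DIR/test/env/env_dpdk_post_init/env_dpdk_post_init" \
     -c 0x1 --base-virtaddr=0x200000000000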
00:05:36.467 00:05:36.467 real 0m4.754s 00:05:36.467 user 0m3.588s 00:05:36.467 sys 0m0.411s 00:05:36.467 19:53:23 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.467 19:53:23 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:36.467 ************************************ 00:05:36.467 END TEST env_dpdk_post_init 00:05:36.467 ************************************ 00:05:36.467 19:53:23 env -- env/env.sh@26 -- # uname 00:05:36.467 19:53:23 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:36.467 19:53:23 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:36.467 19:53:23 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:36.467 19:53:23 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.467 19:53:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.467 ************************************ 00:05:36.467 START TEST env_mem_callbacks 00:05:36.467 ************************************ 00:05:36.467 19:53:23 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:36.467 EAL: Detected CPU lcores: 112 00:05:36.467 EAL: Detected NUMA nodes: 2 00:05:36.467 EAL: Detected static linkage of DPDK 00:05:36.467 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:36.467 EAL: Selected IOVA mode 'VA' 00:05:36.467 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.467 EAL: VFIO support initialized 00:05:36.467 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:36.467 00:05:36.467 00:05:36.467 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.467 http://cunit.sourceforge.net/ 00:05:36.467 00:05:36.467 00:05:36.467 Suite: memory 00:05:36.467 Test: test ... 
00:05:36.467 register 0x200000200000 2097152 00:05:36.467 malloc 3145728 00:05:36.467 register 0x200000400000 4194304 00:05:36.467 buf 0x200000500000 len 3145728 PASSED 00:05:36.467 malloc 64 00:05:36.467 buf 0x2000004fff40 len 64 PASSED 00:05:36.467 malloc 4194304 00:05:36.467 register 0x200000800000 6291456 00:05:36.467 buf 0x200000a00000 len 4194304 PASSED 00:05:36.467 free 0x200000500000 3145728 00:05:36.467 free 0x2000004fff40 64 00:05:36.467 unregister 0x200000400000 4194304 PASSED 00:05:36.467 free 0x200000a00000 4194304 00:05:36.467 unregister 0x200000800000 6291456 PASSED 00:05:36.467 malloc 8388608 00:05:36.467 register 0x200000400000 10485760 00:05:36.467 buf 0x200000600000 len 8388608 PASSED 00:05:36.467 free 0x200000600000 8388608 00:05:36.467 unregister 0x200000400000 10485760 PASSED 00:05:36.467 passed 00:05:36.467 00:05:36.467 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.467 suites 1 1 n/a 0 0 00:05:36.467 tests 1 1 1 0 0 00:05:36.467 asserts 15 15 15 0 n/a 00:05:36.467 00:05:36.467 Elapsed time = 0.005 seconds 00:05:36.467 00:05:36.467 real 0m0.064s 00:05:36.467 user 0m0.016s 00:05:36.467 sys 0m0.048s 00:05:36.467 19:53:23 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.467 19:53:23 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:36.467 ************************************ 00:05:36.467 END TEST env_mem_callbacks 00:05:36.467 ************************************ 00:05:36.467 00:05:36.467 real 0m6.564s 00:05:36.467 user 0m4.524s 00:05:36.467 sys 0m1.302s 00:05:36.467 19:53:24 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.467 19:53:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.467 ************************************ 00:05:36.467 END TEST env 00:05:36.467 ************************************ 00:05:36.467 19:53:24 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:36.467 19:53:24 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:36.467 19:53:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.467 19:53:24 -- common/autotest_common.sh@10 -- # set +x 00:05:36.467 ************************************ 00:05:36.467 START TEST rpc 00:05:36.467 ************************************ 00:05:36.467 19:53:24 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:36.726 * Looking for test storage... 00:05:36.726 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:36.726 19:53:24 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3654388 00:05:36.726 19:53:24 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.726 19:53:24 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3654388 00:05:36.726 19:53:24 rpc -- common/autotest_common.sh@827 -- # '[' -z 3654388 ']' 00:05:36.726 19:53:24 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.726 19:53:24 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:36.726 19:53:24 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
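[Editor's sketch] The rpc suite starting here drives the freshly launched spdk_tgt over its JSON-RPC socket. A minimal manual sketch of the round-trip that rpc_integrity performs below, assuming spdk_tgt is already listening on /var/tmp/spdk.sock (the RPC names are the ones the test issues via rpc_cmd; the jq length checks mirror the test's assertions):

# Hypothetical manual replay of the rpc_integrity flow.
cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
./scripts/rpc.py bdev_malloc_create 8 512               # 8 MB malloc bdev, 512-byte blocks -> Malloc0
./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
./scripts/rpc.py bdev_get_bdevs | jq length             # expect 2: Malloc0 + Passthru0
./scripts/rpc.py bdev_passthru_delete Passthru0
./scripts/rpc.py bdev_malloc_delete Malloc0
./scripts/rpc.py bdev_get_bdevs | jq length             # expect 0 again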
00:05:36.726 19:53:24 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:36.726 19:53:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.726 19:53:24 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:36.726 [2024-07-13 19:53:24.254473] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:36.726 [2024-07-13 19:53:24.254557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3654388 ] 00:05:36.726 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.726 [2024-07-13 19:53:24.322712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.726 [2024-07-13 19:53:24.361730] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:36.726 [2024-07-13 19:53:24.361767] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3654388' to capture a snapshot of events at runtime. 00:05:36.726 [2024-07-13 19:53:24.361776] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:36.726 [2024-07-13 19:53:24.361785] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:36.726 [2024-07-13 19:53:24.361792] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3654388 for offline analysis/debug. 00:05:36.726 [2024-07-13 19:53:24.361811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.985 19:53:24 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:36.985 19:53:24 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:36.985 19:53:24 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:36.985 19:53:24 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:36.985 19:53:24 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:36.985 19:53:24 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:36.985 19:53:24 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:36.985 19:53:24 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.985 19:53:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.985 ************************************ 00:05:36.985 START TEST rpc_integrity 00:05:36.985 ************************************ 00:05:36.985 19:53:24 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:36.985 19:53:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:36.985 19:53:24 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.985 19:53:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.985 19:53:24 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.985 19:53:24 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:36.985 19:53:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:36.985 19:53:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:36.985 19:53:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:36.985 19:53:24 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.985 19:53:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.985 19:53:24 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.985 19:53:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:36.985 19:53:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:36.985 19:53:24 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.985 19:53:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.243 19:53:24 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.243 19:53:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:37.243 { 00:05:37.243 "name": "Malloc0", 00:05:37.243 "aliases": [ 00:05:37.243 "00132407-9dad-4595-bc22-9481a154f8c0" 00:05:37.243 ], 00:05:37.243 "product_name": "Malloc disk", 00:05:37.243 "block_size": 512, 00:05:37.243 "num_blocks": 16384, 00:05:37.243 "uuid": "00132407-9dad-4595-bc22-9481a154f8c0", 00:05:37.243 "assigned_rate_limits": { 00:05:37.243 "rw_ios_per_sec": 0, 00:05:37.243 "rw_mbytes_per_sec": 0, 00:05:37.243 "r_mbytes_per_sec": 0, 00:05:37.243 "w_mbytes_per_sec": 0 00:05:37.243 }, 00:05:37.243 "claimed": false, 00:05:37.244 "zoned": false, 00:05:37.244 "supported_io_types": { 00:05:37.244 "read": true, 00:05:37.244 "write": true, 00:05:37.244 "unmap": true, 00:05:37.244 "write_zeroes": true, 00:05:37.244 "flush": true, 00:05:37.244 "reset": true, 00:05:37.244 "compare": false, 00:05:37.244 "compare_and_write": false, 00:05:37.244 "abort": true, 00:05:37.244 "nvme_admin": false, 00:05:37.244 "nvme_io": false 00:05:37.244 }, 00:05:37.244 "memory_domains": [ 00:05:37.244 { 00:05:37.244 "dma_device_id": "system", 00:05:37.244 "dma_device_type": 1 00:05:37.244 }, 00:05:37.244 { 00:05:37.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.244 "dma_device_type": 2 00:05:37.244 } 00:05:37.244 ], 00:05:37.244 "driver_specific": {} 00:05:37.244 } 00:05:37.244 ]' 00:05:37.244 19:53:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:37.244 19:53:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:37.244 19:53:24 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:37.244 19:53:24 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.244 19:53:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.244 [2024-07-13 19:53:24.705792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:37.244 [2024-07-13 19:53:24.705822] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:37.244 [2024-07-13 19:53:24.705837] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x46fccb0 00:05:37.244 [2024-07-13 19:53:24.705847] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:37.244 [2024-07-13 19:53:24.706640] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:37.244 [2024-07-13 19:53:24.706661] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:37.244 Passthru0 00:05:37.244 19:53:24 
rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.244 19:53:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:37.244 19:53:24 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.244 19:53:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.244 19:53:24 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.244 19:53:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:37.244 { 00:05:37.244 "name": "Malloc0", 00:05:37.244 "aliases": [ 00:05:37.244 "00132407-9dad-4595-bc22-9481a154f8c0" 00:05:37.244 ], 00:05:37.244 "product_name": "Malloc disk", 00:05:37.244 "block_size": 512, 00:05:37.244 "num_blocks": 16384, 00:05:37.244 "uuid": "00132407-9dad-4595-bc22-9481a154f8c0", 00:05:37.244 "assigned_rate_limits": { 00:05:37.244 "rw_ios_per_sec": 0, 00:05:37.244 "rw_mbytes_per_sec": 0, 00:05:37.244 "r_mbytes_per_sec": 0, 00:05:37.244 "w_mbytes_per_sec": 0 00:05:37.244 }, 00:05:37.244 "claimed": true, 00:05:37.244 "claim_type": "exclusive_write", 00:05:37.244 "zoned": false, 00:05:37.244 "supported_io_types": { 00:05:37.244 "read": true, 00:05:37.244 "write": true, 00:05:37.244 "unmap": true, 00:05:37.244 "write_zeroes": true, 00:05:37.244 "flush": true, 00:05:37.244 "reset": true, 00:05:37.244 "compare": false, 00:05:37.244 "compare_and_write": false, 00:05:37.244 "abort": true, 00:05:37.244 "nvme_admin": false, 00:05:37.244 "nvme_io": false 00:05:37.244 }, 00:05:37.244 "memory_domains": [ 00:05:37.244 { 00:05:37.244 "dma_device_id": "system", 00:05:37.244 "dma_device_type": 1 00:05:37.244 }, 00:05:37.244 { 00:05:37.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.244 "dma_device_type": 2 00:05:37.244 } 00:05:37.244 ], 00:05:37.244 "driver_specific": {} 00:05:37.244 }, 00:05:37.244 { 00:05:37.244 "name": "Passthru0", 00:05:37.244 "aliases": [ 00:05:37.244 "154358f2-2901-5800-a972-4fe69e413d14" 00:05:37.244 ], 00:05:37.244 "product_name": "passthru", 00:05:37.244 "block_size": 512, 00:05:37.244 "num_blocks": 16384, 00:05:37.244 "uuid": "154358f2-2901-5800-a972-4fe69e413d14", 00:05:37.244 "assigned_rate_limits": { 00:05:37.244 "rw_ios_per_sec": 0, 00:05:37.244 "rw_mbytes_per_sec": 0, 00:05:37.244 "r_mbytes_per_sec": 0, 00:05:37.244 "w_mbytes_per_sec": 0 00:05:37.244 }, 00:05:37.244 "claimed": false, 00:05:37.244 "zoned": false, 00:05:37.244 "supported_io_types": { 00:05:37.244 "read": true, 00:05:37.244 "write": true, 00:05:37.244 "unmap": true, 00:05:37.244 "write_zeroes": true, 00:05:37.244 "flush": true, 00:05:37.244 "reset": true, 00:05:37.244 "compare": false, 00:05:37.244 "compare_and_write": false, 00:05:37.244 "abort": true, 00:05:37.244 "nvme_admin": false, 00:05:37.244 "nvme_io": false 00:05:37.244 }, 00:05:37.244 "memory_domains": [ 00:05:37.244 { 00:05:37.244 "dma_device_id": "system", 00:05:37.244 "dma_device_type": 1 00:05:37.244 }, 00:05:37.244 { 00:05:37.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.244 "dma_device_type": 2 00:05:37.244 } 00:05:37.244 ], 00:05:37.244 "driver_specific": { 00:05:37.244 "passthru": { 00:05:37.244 "name": "Passthru0", 00:05:37.244 "base_bdev_name": "Malloc0" 00:05:37.244 } 00:05:37.244 } 00:05:37.244 } 00:05:37.244 ]' 00:05:37.244 19:53:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:37.244 19:53:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:37.244 19:53:24 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:37.244 19:53:24 rpc.rpc_integrity -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.244 19:53:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.244 19:53:24 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.244 19:53:24 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:37.244 19:53:24 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.244 19:53:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.244 19:53:24 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.244 19:53:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:37.244 19:53:24 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.244 19:53:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.244 19:53:24 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.244 19:53:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:37.244 19:53:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:37.244 19:53:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:37.244 00:05:37.244 real 0m0.274s 00:05:37.244 user 0m0.173s 00:05:37.244 sys 0m0.039s 00:05:37.244 19:53:24 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.244 19:53:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.244 ************************************ 00:05:37.244 END TEST rpc_integrity 00:05:37.244 ************************************ 00:05:37.244 19:53:24 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:37.244 19:53:24 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.244 19:53:24 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.244 19:53:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.503 ************************************ 00:05:37.504 START TEST rpc_plugins 00:05:37.504 ************************************ 00:05:37.504 19:53:24 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:37.504 19:53:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:37.504 19:53:24 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.504 19:53:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.504 19:53:24 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.504 19:53:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:37.504 19:53:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:37.504 19:53:24 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.504 19:53:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.504 19:53:24 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.504 19:53:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:37.504 { 00:05:37.504 "name": "Malloc1", 00:05:37.504 "aliases": [ 00:05:37.504 "bfd35e06-9b64-482c-9d06-d85723148a3a" 00:05:37.504 ], 00:05:37.504 "product_name": "Malloc disk", 00:05:37.504 "block_size": 4096, 00:05:37.504 "num_blocks": 256, 00:05:37.504 "uuid": "bfd35e06-9b64-482c-9d06-d85723148a3a", 00:05:37.504 "assigned_rate_limits": { 00:05:37.504 "rw_ios_per_sec": 0, 00:05:37.504 "rw_mbytes_per_sec": 0, 00:05:37.504 "r_mbytes_per_sec": 0, 00:05:37.504 "w_mbytes_per_sec": 0 00:05:37.504 }, 00:05:37.504 "claimed": false, 00:05:37.504 "zoned": false, 00:05:37.504 
"supported_io_types": { 00:05:37.504 "read": true, 00:05:37.504 "write": true, 00:05:37.504 "unmap": true, 00:05:37.504 "write_zeroes": true, 00:05:37.504 "flush": true, 00:05:37.504 "reset": true, 00:05:37.504 "compare": false, 00:05:37.504 "compare_and_write": false, 00:05:37.504 "abort": true, 00:05:37.504 "nvme_admin": false, 00:05:37.504 "nvme_io": false 00:05:37.504 }, 00:05:37.504 "memory_domains": [ 00:05:37.504 { 00:05:37.504 "dma_device_id": "system", 00:05:37.504 "dma_device_type": 1 00:05:37.504 }, 00:05:37.504 { 00:05:37.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.504 "dma_device_type": 2 00:05:37.504 } 00:05:37.504 ], 00:05:37.504 "driver_specific": {} 00:05:37.504 } 00:05:37.504 ]' 00:05:37.504 19:53:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:37.504 19:53:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:37.504 19:53:24 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:37.504 19:53:24 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.504 19:53:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.504 19:53:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.504 19:53:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:37.504 19:53:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.504 19:53:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.504 19:53:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.504 19:53:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:37.504 19:53:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:37.504 19:53:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:37.504 00:05:37.504 real 0m0.130s 00:05:37.504 user 0m0.076s 00:05:37.504 sys 0m0.018s 00:05:37.504 19:53:25 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.504 19:53:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.504 ************************************ 00:05:37.504 END TEST rpc_plugins 00:05:37.504 ************************************ 00:05:37.504 19:53:25 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:37.504 19:53:25 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.504 19:53:25 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.504 19:53:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.504 ************************************ 00:05:37.504 START TEST rpc_trace_cmd_test 00:05:37.504 ************************************ 00:05:37.504 19:53:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:37.504 19:53:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:37.504 19:53:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:37.504 19:53:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.504 19:53:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:37.504 19:53:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.504 19:53:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:37.504 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3654388", 00:05:37.504 "tpoint_group_mask": "0x8", 00:05:37.504 "iscsi_conn": { 00:05:37.504 "mask": "0x2", 00:05:37.504 "tpoint_mask": "0x0" 00:05:37.504 }, 00:05:37.504 "scsi": { 00:05:37.504 
"mask": "0x4", 00:05:37.504 "tpoint_mask": "0x0" 00:05:37.504 }, 00:05:37.504 "bdev": { 00:05:37.504 "mask": "0x8", 00:05:37.504 "tpoint_mask": "0xffffffffffffffff" 00:05:37.504 }, 00:05:37.504 "nvmf_rdma": { 00:05:37.504 "mask": "0x10", 00:05:37.504 "tpoint_mask": "0x0" 00:05:37.504 }, 00:05:37.504 "nvmf_tcp": { 00:05:37.504 "mask": "0x20", 00:05:37.504 "tpoint_mask": "0x0" 00:05:37.504 }, 00:05:37.504 "ftl": { 00:05:37.504 "mask": "0x40", 00:05:37.504 "tpoint_mask": "0x0" 00:05:37.504 }, 00:05:37.504 "blobfs": { 00:05:37.504 "mask": "0x80", 00:05:37.504 "tpoint_mask": "0x0" 00:05:37.504 }, 00:05:37.504 "dsa": { 00:05:37.504 "mask": "0x200", 00:05:37.504 "tpoint_mask": "0x0" 00:05:37.504 }, 00:05:37.504 "thread": { 00:05:37.504 "mask": "0x400", 00:05:37.504 "tpoint_mask": "0x0" 00:05:37.504 }, 00:05:37.504 "nvme_pcie": { 00:05:37.504 "mask": "0x800", 00:05:37.504 "tpoint_mask": "0x0" 00:05:37.504 }, 00:05:37.504 "iaa": { 00:05:37.504 "mask": "0x1000", 00:05:37.504 "tpoint_mask": "0x0" 00:05:37.504 }, 00:05:37.504 "nvme_tcp": { 00:05:37.504 "mask": "0x2000", 00:05:37.504 "tpoint_mask": "0x0" 00:05:37.504 }, 00:05:37.504 "bdev_nvme": { 00:05:37.504 "mask": "0x4000", 00:05:37.504 "tpoint_mask": "0x0" 00:05:37.504 }, 00:05:37.504 "sock": { 00:05:37.504 "mask": "0x8000", 00:05:37.504 "tpoint_mask": "0x0" 00:05:37.504 } 00:05:37.504 }' 00:05:37.504 19:53:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:37.763 19:53:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:37.763 19:53:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:37.763 19:53:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:37.763 19:53:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:37.763 19:53:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:37.763 19:53:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:37.763 19:53:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:37.763 19:53:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:37.763 19:53:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:37.763 00:05:37.763 real 0m0.218s 00:05:37.763 user 0m0.179s 00:05:37.763 sys 0m0.034s 00:05:37.763 19:53:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.763 19:53:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:37.763 ************************************ 00:05:37.763 END TEST rpc_trace_cmd_test 00:05:37.763 ************************************ 00:05:37.763 19:53:25 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:37.763 19:53:25 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:37.763 19:53:25 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:37.763 19:53:25 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.763 19:53:25 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.763 19:53:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.022 ************************************ 00:05:38.022 START TEST rpc_daemon_integrity 00:05:38.022 ************************************ 00:05:38.022 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:38.022 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:38.022 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 
00:05:38.022 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.022 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.022 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:38.022 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:38.022 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:38.022 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:38.022 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.022 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.022 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.022 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:38.022 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:38.022 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.022 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.022 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.022 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:38.022 { 00:05:38.022 "name": "Malloc2", 00:05:38.022 "aliases": [ 00:05:38.022 "65333e07-1bd1-4e78-92b9-f5973e7ee0df" 00:05:38.022 ], 00:05:38.022 "product_name": "Malloc disk", 00:05:38.022 "block_size": 512, 00:05:38.022 "num_blocks": 16384, 00:05:38.022 "uuid": "65333e07-1bd1-4e78-92b9-f5973e7ee0df", 00:05:38.022 "assigned_rate_limits": { 00:05:38.022 "rw_ios_per_sec": 0, 00:05:38.022 "rw_mbytes_per_sec": 0, 00:05:38.022 "r_mbytes_per_sec": 0, 00:05:38.022 "w_mbytes_per_sec": 0 00:05:38.022 }, 00:05:38.022 "claimed": false, 00:05:38.022 "zoned": false, 00:05:38.022 "supported_io_types": { 00:05:38.022 "read": true, 00:05:38.022 "write": true, 00:05:38.022 "unmap": true, 00:05:38.022 "write_zeroes": true, 00:05:38.022 "flush": true, 00:05:38.022 "reset": true, 00:05:38.022 "compare": false, 00:05:38.022 "compare_and_write": false, 00:05:38.022 "abort": true, 00:05:38.022 "nvme_admin": false, 00:05:38.022 "nvme_io": false 00:05:38.022 }, 00:05:38.022 "memory_domains": [ 00:05:38.022 { 00:05:38.022 "dma_device_id": "system", 00:05:38.022 "dma_device_type": 1 00:05:38.022 }, 00:05:38.022 { 00:05:38.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.022 "dma_device_type": 2 00:05:38.022 } 00:05:38.022 ], 00:05:38.022 "driver_specific": {} 00:05:38.022 } 00:05:38.022 ]' 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.023 [2024-07-13 19:53:25.568018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:38.023 [2024-07-13 19:53:25.568045] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:38.023 [2024-07-13 19:53:25.568060] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x46ee570 00:05:38.023 [2024-07-13 19:53:25.568073] 
vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:38.023 [2024-07-13 19:53:25.568756] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:38.023 [2024-07-13 19:53:25.568776] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:38.023 Passthru0 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:38.023 { 00:05:38.023 "name": "Malloc2", 00:05:38.023 "aliases": [ 00:05:38.023 "65333e07-1bd1-4e78-92b9-f5973e7ee0df" 00:05:38.023 ], 00:05:38.023 "product_name": "Malloc disk", 00:05:38.023 "block_size": 512, 00:05:38.023 "num_blocks": 16384, 00:05:38.023 "uuid": "65333e07-1bd1-4e78-92b9-f5973e7ee0df", 00:05:38.023 "assigned_rate_limits": { 00:05:38.023 "rw_ios_per_sec": 0, 00:05:38.023 "rw_mbytes_per_sec": 0, 00:05:38.023 "r_mbytes_per_sec": 0, 00:05:38.023 "w_mbytes_per_sec": 0 00:05:38.023 }, 00:05:38.023 "claimed": true, 00:05:38.023 "claim_type": "exclusive_write", 00:05:38.023 "zoned": false, 00:05:38.023 "supported_io_types": { 00:05:38.023 "read": true, 00:05:38.023 "write": true, 00:05:38.023 "unmap": true, 00:05:38.023 "write_zeroes": true, 00:05:38.023 "flush": true, 00:05:38.023 "reset": true, 00:05:38.023 "compare": false, 00:05:38.023 "compare_and_write": false, 00:05:38.023 "abort": true, 00:05:38.023 "nvme_admin": false, 00:05:38.023 "nvme_io": false 00:05:38.023 }, 00:05:38.023 "memory_domains": [ 00:05:38.023 { 00:05:38.023 "dma_device_id": "system", 00:05:38.023 "dma_device_type": 1 00:05:38.023 }, 00:05:38.023 { 00:05:38.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.023 "dma_device_type": 2 00:05:38.023 } 00:05:38.023 ], 00:05:38.023 "driver_specific": {} 00:05:38.023 }, 00:05:38.023 { 00:05:38.023 "name": "Passthru0", 00:05:38.023 "aliases": [ 00:05:38.023 "971c8409-45fb-57cc-860c-767d5379fbab" 00:05:38.023 ], 00:05:38.023 "product_name": "passthru", 00:05:38.023 "block_size": 512, 00:05:38.023 "num_blocks": 16384, 00:05:38.023 "uuid": "971c8409-45fb-57cc-860c-767d5379fbab", 00:05:38.023 "assigned_rate_limits": { 00:05:38.023 "rw_ios_per_sec": 0, 00:05:38.023 "rw_mbytes_per_sec": 0, 00:05:38.023 "r_mbytes_per_sec": 0, 00:05:38.023 "w_mbytes_per_sec": 0 00:05:38.023 }, 00:05:38.023 "claimed": false, 00:05:38.023 "zoned": false, 00:05:38.023 "supported_io_types": { 00:05:38.023 "read": true, 00:05:38.023 "write": true, 00:05:38.023 "unmap": true, 00:05:38.023 "write_zeroes": true, 00:05:38.023 "flush": true, 00:05:38.023 "reset": true, 00:05:38.023 "compare": false, 00:05:38.023 "compare_and_write": false, 00:05:38.023 "abort": true, 00:05:38.023 "nvme_admin": false, 00:05:38.023 "nvme_io": false 00:05:38.023 }, 00:05:38.023 "memory_domains": [ 00:05:38.023 { 00:05:38.023 "dma_device_id": "system", 00:05:38.023 "dma_device_type": 1 00:05:38.023 }, 00:05:38.023 { 00:05:38.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.023 "dma_device_type": 2 00:05:38.023 } 00:05:38.023 ], 00:05:38.023 "driver_specific": { 00:05:38.023 "passthru": { 00:05:38.023 "name": "Passthru0", 
00:05:38.023 "base_bdev_name": "Malloc2" 00:05:38.023 } 00:05:38.023 } 00:05:38.023 } 00:05:38.023 ]' 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:38.023 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:38.282 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:38.282 00:05:38.282 real 0m0.281s 00:05:38.282 user 0m0.176s 00:05:38.282 sys 0m0.041s 00:05:38.282 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.282 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.282 ************************************ 00:05:38.282 END TEST rpc_daemon_integrity 00:05:38.282 ************************************ 00:05:38.282 19:53:25 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:38.282 19:53:25 rpc -- rpc/rpc.sh@84 -- # killprocess 3654388 00:05:38.283 19:53:25 rpc -- common/autotest_common.sh@946 -- # '[' -z 3654388 ']' 00:05:38.283 19:53:25 rpc -- common/autotest_common.sh@950 -- # kill -0 3654388 00:05:38.283 19:53:25 rpc -- common/autotest_common.sh@951 -- # uname 00:05:38.283 19:53:25 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:38.283 19:53:25 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3654388 00:05:38.283 19:53:25 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:38.283 19:53:25 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:38.283 19:53:25 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3654388' 00:05:38.283 killing process with pid 3654388 00:05:38.283 19:53:25 rpc -- common/autotest_common.sh@965 -- # kill 3654388 00:05:38.283 19:53:25 rpc -- common/autotest_common.sh@970 -- # wait 3654388 00:05:38.542 00:05:38.542 real 0m1.976s 00:05:38.542 user 0m2.537s 00:05:38.542 sys 0m0.732s 00:05:38.542 19:53:26 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.542 19:53:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.542 ************************************ 00:05:38.542 END TEST rpc 00:05:38.542 ************************************ 00:05:38.542 19:53:26 -- 
spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:38.542 19:53:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.542 19:53:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.542 19:53:26 -- common/autotest_common.sh@10 -- # set +x 00:05:38.542 ************************************ 00:05:38.542 START TEST skip_rpc 00:05:38.542 ************************************ 00:05:38.542 19:53:26 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:38.801 * Looking for test storage... 00:05:38.801 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:38.801 19:53:26 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:38.801 19:53:26 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:38.801 19:53:26 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:38.801 19:53:26 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.801 19:53:26 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.801 19:53:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.801 ************************************ 00:05:38.801 START TEST skip_rpc 00:05:38.802 ************************************ 00:05:38.802 19:53:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:38.802 19:53:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3654840 00:05:38.802 19:53:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.802 19:53:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:38.802 19:53:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:38.802 [2024-07-13 19:53:26.339909] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:05:38.802 [2024-07-13 19:53:26.339964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3654840 ] 00:05:38.802 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.802 [2024-07-13 19:53:26.406570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.802 [2024-07-13 19:53:26.444225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.073 19:53:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:44.073 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:44.073 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:44.073 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:44.073 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.073 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:44.073 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.073 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:44.073 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.073 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.073 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:44.073 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:44.073 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:44.073 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:44.073 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:44.073 19:53:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:44.073 19:53:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3654840 00:05:44.073 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 3654840 ']' 00:05:44.073 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 3654840 00:05:44.073 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:44.074 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:44.074 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3654840 00:05:44.074 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:44.074 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:44.074 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3654840' 00:05:44.074 killing process with pid 3654840 00:05:44.074 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 3654840 00:05:44.074 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 3654840 00:05:44.074 00:05:44.074 real 0m5.361s 00:05:44.074 user 0m5.123s 00:05:44.074 sys 0m0.281s 00:05:44.074 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.074 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.074 ************************************ 00:05:44.074 END TEST skip_rpc 
00:05:44.074 ************************************ 00:05:44.074 19:53:31 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:44.074 19:53:31 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:44.074 19:53:31 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.074 19:53:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.332 ************************************ 00:05:44.332 START TEST skip_rpc_with_json 00:05:44.332 ************************************ 00:05:44.332 19:53:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:44.332 19:53:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:44.332 19:53:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3655878 00:05:44.332 19:53:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.332 19:53:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.332 19:53:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3655878 00:05:44.332 19:53:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 3655878 ']' 00:05:44.332 19:53:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.332 19:53:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:44.332 19:53:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.332 19:53:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:44.332 19:53:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:44.332 [2024-07-13 19:53:31.782707] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
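skip_rpc_with_json, which begins here, creates a TCP transport over RPC, saves the live configuration as JSON, and then restarts the target from that file, all visible further down in this trace. A hand-run sketch of the same flow, with an illustrative output path instead of the test's config.json under test/rpc:

  # With a target already listening on /var/tmp/spdk.sock:
  ./scripts/rpc.py nvmf_create_transport -t tcp

  # Dump the full running configuration to a JSON file.
  ./scripts/rpc.py save_config > /tmp/config.json

  # Relaunch without an RPC server, replaying the saved configuration.
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json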
00:05:44.333 [2024-07-13 19:53:31.782786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3655878 ] 00:05:44.333 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.333 [2024-07-13 19:53:31.851920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.333 [2024-07-13 19:53:31.891097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.592 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:44.592 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:44.592 19:53:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:44.592 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.592 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:44.592 [2024-07-13 19:53:32.078731] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:44.592 request: 00:05:44.592 { 00:05:44.592 "trtype": "tcp", 00:05:44.592 "method": "nvmf_get_transports", 00:05:44.592 "req_id": 1 00:05:44.592 } 00:05:44.592 Got JSON-RPC error response 00:05:44.592 response: 00:05:44.592 { 00:05:44.592 "code": -19, 00:05:44.592 "message": "No such device" 00:05:44.592 } 00:05:44.592 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:44.592 19:53:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:44.592 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.592 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:44.592 [2024-07-13 19:53:32.090822] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:44.592 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.592 19:53:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:44.592 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.592 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:44.851 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.851 19:53:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:44.851 { 00:05:44.851 "subsystems": [ 00:05:44.851 { 00:05:44.851 "subsystem": "scheduler", 00:05:44.851 "config": [ 00:05:44.851 { 00:05:44.851 "method": "framework_set_scheduler", 00:05:44.851 "params": { 00:05:44.851 "name": "static" 00:05:44.851 } 00:05:44.851 } 00:05:44.851 ] 00:05:44.851 }, 00:05:44.851 { 00:05:44.851 "subsystem": "vmd", 00:05:44.851 "config": [] 00:05:44.851 }, 00:05:44.851 { 00:05:44.851 "subsystem": "sock", 00:05:44.851 "config": [ 00:05:44.851 { 00:05:44.851 "method": "sock_set_default_impl", 00:05:44.851 "params": { 00:05:44.851 "impl_name": "posix" 00:05:44.851 } 00:05:44.851 }, 00:05:44.851 { 00:05:44.851 "method": "sock_impl_set_options", 00:05:44.851 "params": { 00:05:44.851 "impl_name": "ssl", 00:05:44.851 "recv_buf_size": 4096, 00:05:44.851 "send_buf_size": 4096, 00:05:44.851 "enable_recv_pipe": true, 00:05:44.851 "enable_quickack": 
false, 00:05:44.851 "enable_placement_id": 0, 00:05:44.851 "enable_zerocopy_send_server": true, 00:05:44.851 "enable_zerocopy_send_client": false, 00:05:44.851 "zerocopy_threshold": 0, 00:05:44.851 "tls_version": 0, 00:05:44.851 "enable_ktls": false 00:05:44.851 } 00:05:44.851 }, 00:05:44.851 { 00:05:44.851 "method": "sock_impl_set_options", 00:05:44.851 "params": { 00:05:44.851 "impl_name": "posix", 00:05:44.851 "recv_buf_size": 2097152, 00:05:44.851 "send_buf_size": 2097152, 00:05:44.851 "enable_recv_pipe": true, 00:05:44.851 "enable_quickack": false, 00:05:44.851 "enable_placement_id": 0, 00:05:44.851 "enable_zerocopy_send_server": true, 00:05:44.851 "enable_zerocopy_send_client": false, 00:05:44.851 "zerocopy_threshold": 0, 00:05:44.851 "tls_version": 0, 00:05:44.851 "enable_ktls": false 00:05:44.851 } 00:05:44.851 } 00:05:44.851 ] 00:05:44.851 }, 00:05:44.851 { 00:05:44.851 "subsystem": "iobuf", 00:05:44.851 "config": [ 00:05:44.851 { 00:05:44.851 "method": "iobuf_set_options", 00:05:44.851 "params": { 00:05:44.851 "small_pool_count": 8192, 00:05:44.851 "large_pool_count": 1024, 00:05:44.851 "small_bufsize": 8192, 00:05:44.851 "large_bufsize": 135168 00:05:44.851 } 00:05:44.851 } 00:05:44.851 ] 00:05:44.851 }, 00:05:44.851 { 00:05:44.851 "subsystem": "keyring", 00:05:44.851 "config": [] 00:05:44.851 }, 00:05:44.851 { 00:05:44.851 "subsystem": "vfio_user_target", 00:05:44.851 "config": null 00:05:44.851 }, 00:05:44.851 { 00:05:44.851 "subsystem": "accel", 00:05:44.851 "config": [ 00:05:44.851 { 00:05:44.851 "method": "accel_set_options", 00:05:44.851 "params": { 00:05:44.851 "small_cache_size": 128, 00:05:44.851 "large_cache_size": 16, 00:05:44.851 "task_count": 2048, 00:05:44.851 "sequence_count": 2048, 00:05:44.851 "buf_count": 2048 00:05:44.851 } 00:05:44.851 } 00:05:44.851 ] 00:05:44.851 }, 00:05:44.851 { 00:05:44.851 "subsystem": "bdev", 00:05:44.851 "config": [ 00:05:44.851 { 00:05:44.851 "method": "bdev_set_options", 00:05:44.851 "params": { 00:05:44.851 "bdev_io_pool_size": 65535, 00:05:44.851 "bdev_io_cache_size": 256, 00:05:44.851 "bdev_auto_examine": true, 00:05:44.851 "iobuf_small_cache_size": 128, 00:05:44.851 "iobuf_large_cache_size": 16 00:05:44.851 } 00:05:44.851 }, 00:05:44.851 { 00:05:44.851 "method": "bdev_raid_set_options", 00:05:44.851 "params": { 00:05:44.851 "process_window_size_kb": 1024 00:05:44.851 } 00:05:44.851 }, 00:05:44.851 { 00:05:44.851 "method": "bdev_nvme_set_options", 00:05:44.851 "params": { 00:05:44.851 "action_on_timeout": "none", 00:05:44.851 "timeout_us": 0, 00:05:44.851 "timeout_admin_us": 0, 00:05:44.851 "keep_alive_timeout_ms": 10000, 00:05:44.851 "arbitration_burst": 0, 00:05:44.851 "low_priority_weight": 0, 00:05:44.851 "medium_priority_weight": 0, 00:05:44.851 "high_priority_weight": 0, 00:05:44.851 "nvme_adminq_poll_period_us": 10000, 00:05:44.851 "nvme_ioq_poll_period_us": 0, 00:05:44.851 "io_queue_requests": 0, 00:05:44.851 "delay_cmd_submit": true, 00:05:44.851 "transport_retry_count": 4, 00:05:44.851 "bdev_retry_count": 3, 00:05:44.851 "transport_ack_timeout": 0, 00:05:44.851 "ctrlr_loss_timeout_sec": 0, 00:05:44.851 "reconnect_delay_sec": 0, 00:05:44.851 "fast_io_fail_timeout_sec": 0, 00:05:44.851 "disable_auto_failback": false, 00:05:44.851 "generate_uuids": false, 00:05:44.851 "transport_tos": 0, 00:05:44.851 "nvme_error_stat": false, 00:05:44.851 "rdma_srq_size": 0, 00:05:44.851 "io_path_stat": false, 00:05:44.851 "allow_accel_sequence": false, 00:05:44.851 "rdma_max_cq_size": 0, 00:05:44.851 "rdma_cm_event_timeout_ms": 0, 
00:05:44.851 "dhchap_digests": [ 00:05:44.851 "sha256", 00:05:44.851 "sha384", 00:05:44.851 "sha512" 00:05:44.851 ], 00:05:44.851 "dhchap_dhgroups": [ 00:05:44.851 "null", 00:05:44.851 "ffdhe2048", 00:05:44.851 "ffdhe3072", 00:05:44.851 "ffdhe4096", 00:05:44.851 "ffdhe6144", 00:05:44.852 "ffdhe8192" 00:05:44.852 ] 00:05:44.852 } 00:05:44.852 }, 00:05:44.852 { 00:05:44.852 "method": "bdev_nvme_set_hotplug", 00:05:44.852 "params": { 00:05:44.852 "period_us": 100000, 00:05:44.852 "enable": false 00:05:44.852 } 00:05:44.852 }, 00:05:44.852 { 00:05:44.852 "method": "bdev_iscsi_set_options", 00:05:44.852 "params": { 00:05:44.852 "timeout_sec": 30 00:05:44.852 } 00:05:44.852 }, 00:05:44.852 { 00:05:44.852 "method": "bdev_wait_for_examine" 00:05:44.852 } 00:05:44.852 ] 00:05:44.852 }, 00:05:44.852 { 00:05:44.852 "subsystem": "nvmf", 00:05:44.852 "config": [ 00:05:44.852 { 00:05:44.852 "method": "nvmf_set_config", 00:05:44.852 "params": { 00:05:44.852 "discovery_filter": "match_any", 00:05:44.852 "admin_cmd_passthru": { 00:05:44.852 "identify_ctrlr": false 00:05:44.852 } 00:05:44.852 } 00:05:44.852 }, 00:05:44.852 { 00:05:44.852 "method": "nvmf_set_max_subsystems", 00:05:44.852 "params": { 00:05:44.852 "max_subsystems": 1024 00:05:44.852 } 00:05:44.852 }, 00:05:44.852 { 00:05:44.852 "method": "nvmf_set_crdt", 00:05:44.852 "params": { 00:05:44.852 "crdt1": 0, 00:05:44.852 "crdt2": 0, 00:05:44.852 "crdt3": 0 00:05:44.852 } 00:05:44.852 }, 00:05:44.852 { 00:05:44.852 "method": "nvmf_create_transport", 00:05:44.852 "params": { 00:05:44.852 "trtype": "TCP", 00:05:44.852 "max_queue_depth": 128, 00:05:44.852 "max_io_qpairs_per_ctrlr": 127, 00:05:44.852 "in_capsule_data_size": 4096, 00:05:44.852 "max_io_size": 131072, 00:05:44.852 "io_unit_size": 131072, 00:05:44.852 "max_aq_depth": 128, 00:05:44.852 "num_shared_buffers": 511, 00:05:44.852 "buf_cache_size": 4294967295, 00:05:44.852 "dif_insert_or_strip": false, 00:05:44.852 "zcopy": false, 00:05:44.852 "c2h_success": true, 00:05:44.852 "sock_priority": 0, 00:05:44.852 "abort_timeout_sec": 1, 00:05:44.852 "ack_timeout": 0, 00:05:44.852 "data_wr_pool_size": 0 00:05:44.852 } 00:05:44.852 } 00:05:44.852 ] 00:05:44.852 }, 00:05:44.852 { 00:05:44.852 "subsystem": "nbd", 00:05:44.852 "config": [] 00:05:44.852 }, 00:05:44.852 { 00:05:44.852 "subsystem": "ublk", 00:05:44.852 "config": [] 00:05:44.852 }, 00:05:44.852 { 00:05:44.852 "subsystem": "vhost_blk", 00:05:44.852 "config": [] 00:05:44.852 }, 00:05:44.852 { 00:05:44.852 "subsystem": "scsi", 00:05:44.852 "config": null 00:05:44.852 }, 00:05:44.852 { 00:05:44.852 "subsystem": "iscsi", 00:05:44.852 "config": [ 00:05:44.852 { 00:05:44.852 "method": "iscsi_set_options", 00:05:44.852 "params": { 00:05:44.852 "node_base": "iqn.2016-06.io.spdk", 00:05:44.852 "max_sessions": 128, 00:05:44.852 "max_connections_per_session": 2, 00:05:44.852 "max_queue_depth": 64, 00:05:44.852 "default_time2wait": 2, 00:05:44.852 "default_time2retain": 20, 00:05:44.852 "first_burst_length": 8192, 00:05:44.852 "immediate_data": true, 00:05:44.852 "allow_duplicated_isid": false, 00:05:44.852 "error_recovery_level": 0, 00:05:44.852 "nop_timeout": 60, 00:05:44.852 "nop_in_interval": 30, 00:05:44.852 "disable_chap": false, 00:05:44.852 "require_chap": false, 00:05:44.852 "mutual_chap": false, 00:05:44.852 "chap_group": 0, 00:05:44.852 "max_large_datain_per_connection": 64, 00:05:44.852 "max_r2t_per_connection": 4, 00:05:44.852 "pdu_pool_size": 36864, 00:05:44.852 "immediate_data_pool_size": 16384, 00:05:44.852 "data_out_pool_size": 2048 
00:05:44.852 } 00:05:44.852 } 00:05:44.852 ] 00:05:44.852 }, 00:05:44.852 { 00:05:44.852 "subsystem": "vhost_scsi", 00:05:44.852 "config": [] 00:05:44.852 } 00:05:44.852 ] 00:05:44.852 } 00:05:44.852 19:53:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:44.852 19:53:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3655878 00:05:44.852 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3655878 ']' 00:05:44.852 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3655878 00:05:44.852 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:44.852 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:44.852 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3655878 00:05:44.852 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:44.852 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:44.852 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3655878' 00:05:44.852 killing process with pid 3655878 00:05:44.852 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3655878 00:05:44.852 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3655878 00:05:45.111 19:53:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3655939 00:05:45.111 19:53:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:45.111 19:53:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:50.384 19:53:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3655939 00:05:50.384 19:53:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3655939 ']' 00:05:50.384 19:53:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3655939 00:05:50.384 19:53:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:50.384 19:53:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:50.384 19:53:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3655939 00:05:50.384 19:53:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:50.384 19:53:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:50.384 19:53:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3655939' 00:05:50.384 killing process with pid 3655939 00:05:50.384 19:53:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3655939 00:05:50.384 19:53:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3655939 00:05:50.384 19:53:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:50.384 19:53:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:50.384 00:05:50.384 real 
0m6.209s 00:05:50.384 user 0m5.888s 00:05:50.384 sys 0m0.604s 00:05:50.384 19:53:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:50.384 19:53:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:50.384 ************************************ 00:05:50.384 END TEST skip_rpc_with_json 00:05:50.384 ************************************ 00:05:50.384 19:53:38 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:50.384 19:53:38 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:50.384 19:53:38 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.384 19:53:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.384 ************************************ 00:05:50.384 START TEST skip_rpc_with_delay 00:05:50.384 ************************************ 00:05:50.384 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:50.384 19:53:38 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:50.384 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:50.384 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:50.384 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.384 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.384 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.644 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.644 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.644 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.644 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.644 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:50.644 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:50.644 [2024-07-13 19:53:38.056170] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
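That error is the point of test_skip_rpc_with_delay: --wait-for-rpc defers subsystem initialization until an RPC tells the target to proceed, so combining it with --no-rpc-server is rejected at startup. One-line reproduction (repo-relative path):

  # Expected to exit non-zero with the 'Cannot use --wait-for-rpc' error shown above.
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc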
00:05:50.644 [2024-07-13 19:53:38.056244] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:50.644 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:50.644 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:50.644 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:50.644 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:50.644 00:05:50.644 real 0m0.027s 00:05:50.644 user 0m0.010s 00:05:50.644 sys 0m0.017s 00:05:50.644 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:50.644 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:50.644 ************************************ 00:05:50.644 END TEST skip_rpc_with_delay 00:05:50.644 ************************************ 00:05:50.644 19:53:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:50.644 19:53:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:50.644 19:53:38 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:50.644 19:53:38 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:50.644 19:53:38 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.644 19:53:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.644 ************************************ 00:05:50.644 START TEST exit_on_failed_rpc_init 00:05:50.644 ************************************ 00:05:50.644 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:50.644 19:53:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3657054 00:05:50.644 19:53:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3657054 00:05:50.644 19:53:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.644 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 3657054 ']' 00:05:50.644 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.644 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:50.644 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.644 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:50.644 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:50.644 [2024-07-13 19:53:38.172930] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
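exit_on_failed_rpc_init, starting here, launches one target on the default RPC socket and then expects a second instance to fail because the socket is already claimed (the 'RPC Unix domain socket path /var/tmp/spdk.sock in use' error appears below). In shell form, with repo-relative paths:

  # First instance claims /var/tmp/spdk.sock.
  ./build/bin/spdk_tgt -m 0x1 &

  # Second instance uses a different core mask but the same default socket,
  # so it is expected to log the 'in use' error and exit instead of listening.
  ./build/bin/spdk_tgt -m 0x2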
00:05:50.644 [2024-07-13 19:53:38.173007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3657054 ] 00:05:50.644 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.644 [2024-07-13 19:53:38.240421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.644 [2024-07-13 19:53:38.280095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.904 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:50.904 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:50.904 19:53:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.904 19:53:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:50.904 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:50.904 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:50.904 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.904 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.904 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.904 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.904 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.904 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.904 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.904 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:50.904 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:50.904 [2024-07-13 19:53:38.489940] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:50.904 [2024-07-13 19:53:38.490029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3657060 ] 00:05:50.904 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.904 [2024-07-13 19:53:38.558400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.199 [2024-07-13 19:53:38.596872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.199 [2024-07-13 19:53:38.596951] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. 
Specify another. 00:05:51.199 [2024-07-13 19:53:38.596963] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:51.199 [2024-07-13 19:53:38.596971] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:51.199 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:51.199 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:51.199 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:51.199 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:51.199 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:51.199 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:51.199 19:53:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:51.199 19:53:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3657054 00:05:51.199 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 3657054 ']' 00:05:51.199 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 3657054 00:05:51.199 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:51.199 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:51.199 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3657054 00:05:51.199 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:51.199 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:51.199 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3657054' 00:05:51.199 killing process with pid 3657054 00:05:51.199 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 3657054 00:05:51.199 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 3657054 00:05:51.512 00:05:51.512 real 0m0.852s 00:05:51.512 user 0m0.864s 00:05:51.512 sys 0m0.399s 00:05:51.512 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:51.512 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:51.512 ************************************ 00:05:51.512 END TEST exit_on_failed_rpc_init 00:05:51.512 ************************************ 00:05:51.512 19:53:39 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:51.512 00:05:51.512 real 0m12.871s 00:05:51.512 user 0m12.042s 00:05:51.512 sys 0m1.600s 00:05:51.512 19:53:39 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:51.512 19:53:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.512 ************************************ 00:05:51.512 END TEST skip_rpc 00:05:51.512 ************************************ 00:05:51.512 19:53:39 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:51.512 19:53:39 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:51.512 19:53:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:05:51.512 19:53:39 -- common/autotest_common.sh@10 -- # set +x 00:05:51.512 ************************************ 00:05:51.512 START TEST rpc_client 00:05:51.512 ************************************ 00:05:51.512 19:53:39 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:51.772 * Looking for test storage... 00:05:51.772 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:05:51.772 19:53:39 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:51.772 OK 00:05:51.772 19:53:39 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:51.772 00:05:51.772 real 0m0.109s 00:05:51.772 user 0m0.045s 00:05:51.772 sys 0m0.071s 00:05:51.772 19:53:39 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:51.772 19:53:39 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:51.772 ************************************ 00:05:51.772 END TEST rpc_client 00:05:51.772 ************************************ 00:05:51.772 19:53:39 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:51.772 19:53:39 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:51.772 19:53:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:51.772 19:53:39 -- common/autotest_common.sh@10 -- # set +x 00:05:51.772 ************************************ 00:05:51.772 START TEST json_config 00:05:51.772 ************************************ 00:05:51.772 19:53:39 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:51.772 19:53:39 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:51.772 19:53:39 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:51.772 19:53:39 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.772 19:53:39 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.772 19:53:39 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.772 19:53:39 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.772 19:53:39 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.772 19:53:39 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.772 19:53:39 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.772 19:53:39 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.772 19:53:39 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.772 19:53:39 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.772 19:53:39 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:51.772 19:53:39 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:51.772 19:53:39 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.772 19:53:39 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.772 19:53:39 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:51.772 19:53:39 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:51.772 19:53:39 json_config -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:51.772 19:53:39 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.772 19:53:39 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.772 19:53:39 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.772 19:53:39 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.772 19:53:39 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.772 19:53:39 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.772 19:53:39 json_config -- paths/export.sh@5 -- # export PATH 00:05:51.772 19:53:39 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.772 19:53:39 json_config -- nvmf/common.sh@47 -- # : 0 00:05:51.772 19:53:39 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:51.772 19:53:39 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:51.772 19:53:39 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:51.772 19:53:39 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.772 19:53:39 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.772 19:53:39 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:51.772 19:53:39 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:51.772 19:53:39 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:51.772 19:53:39 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:51.772 19:53:39 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:51.772 19:53:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:51.772 19:53:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:51.772 19:53:39 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 
SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:51.772 19:53:39 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:51.772 WARNING: No tests are enabled so not running JSON configuration tests 00:05:51.772 19:53:39 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:51.772 00:05:51.772 real 0m0.105s 00:05:51.772 user 0m0.055s 00:05:51.772 sys 0m0.051s 00:05:51.772 19:53:39 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:51.772 19:53:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.772 ************************************ 00:05:51.772 END TEST json_config 00:05:51.772 ************************************ 00:05:52.032 19:53:39 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:52.032 19:53:39 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:52.032 19:53:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.032 19:53:39 -- common/autotest_common.sh@10 -- # set +x 00:05:52.032 ************************************ 00:05:52.032 START TEST json_config_extra_key 00:05:52.032 ************************************ 00:05:52.032 19:53:39 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:52.032 19:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:52.032 19:53:39 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:52.032 19:53:39 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:52.032 19:53:39 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:52.032 19:53:39 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:52.032 19:53:39 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:52.032 19:53:39 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:52.032 19:53:39 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:52.032 19:53:39 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:52.032 19:53:39 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:52.032 19:53:39 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:52.032 19:53:39 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:52.032 19:53:39 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:52.032 19:53:39 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:52.032 19:53:39 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:52.032 19:53:39 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:52.032 19:53:39 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:52.032 19:53:39 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:52.032 19:53:39 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 
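json_config exits 0 above because every SPDK_TEST_* flag in its gate sums to zero on this short-fuzz job. The gate is the arithmetic check shown in the trace, so setting any one of those flags non-zero would let the JSON configuration tests run (the flag below is one of the names in the check; whether the full suite then passes also depends on the rest of the test environment):

  # Any non-zero flag in the sum skips the 'No tests are enabled' warning.
  export SPDK_TEST_NVMF=1
  ./test/json_config/json_config.sh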
00:05:52.032 19:53:39 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:52.032 19:53:39 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:52.032 19:53:39 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:52.032 19:53:39 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.032 19:53:39 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.033 19:53:39 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.033 19:53:39 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:52.033 19:53:39 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.033 19:53:39 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:52.033 19:53:39 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:52.033 19:53:39 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:52.033 19:53:39 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:52.033 19:53:39 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:52.033 19:53:39 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:52.033 19:53:39 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:52.033 19:53:39 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:52.033 19:53:39 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:52.033 19:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:52.033 19:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:52.033 19:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:52.033 19:53:39 json_config_extra_key -- 
json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:52.033 19:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:52.033 19:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:52.033 19:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:52.033 19:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:52.033 19:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:52.033 19:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:52.033 19:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:52.033 INFO: launching applications... 00:05:52.033 19:53:39 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:52.033 19:53:39 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:52.033 19:53:39 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:52.033 19:53:39 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:52.033 19:53:39 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:52.033 19:53:39 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:52.033 19:53:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:52.033 19:53:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:52.033 19:53:39 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3657481 00:05:52.033 19:53:39 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:52.033 Waiting for target to run... 00:05:52.033 19:53:39 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3657481 /var/tmp/spdk_tgt.sock 00:05:52.033 19:53:39 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 3657481 ']' 00:05:52.033 19:53:39 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:52.033 19:53:39 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:52.033 19:53:39 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:52.033 19:53:39 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:52.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:52.033 19:53:39 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:52.033 19:53:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:52.033 [2024-07-13 19:53:39.584110] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
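json_config_extra_key boots the target from a prepared JSON file on a private RPC socket and later stops it with SIGINT (the kill appears below). By hand, with repo-relative paths:

  # Boot from the extra_key JSON config on a dedicated socket with a 1024 MB memory pool.
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json ./test/json_config/extra_key.json &

  # Graceful shutdown; the test then polls until the PID is gone.
  kill -SIGINT $!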
00:05:52.033 [2024-07-13 19:53:39.584194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3657481 ] 00:05:52.033 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.292 [2024-07-13 19:53:39.861657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.292 [2024-07-13 19:53:39.883409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.861 19:53:40 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:52.861 19:53:40 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:05:52.861 19:53:40 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:52.861 00:05:52.861 19:53:40 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:52.861 INFO: shutting down applications... 00:05:52.861 19:53:40 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:52.861 19:53:40 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:52.861 19:53:40 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:52.861 19:53:40 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3657481 ]] 00:05:52.861 19:53:40 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3657481 00:05:52.861 19:53:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:52.861 19:53:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:52.861 19:53:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3657481 00:05:52.861 19:53:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:53.430 19:53:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:53.430 19:53:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:53.430 19:53:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3657481 00:05:53.430 19:53:40 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:53.430 19:53:40 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:53.430 19:53:40 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:53.430 19:53:40 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:53.430 SPDK target shutdown done 00:05:53.430 19:53:40 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:53.430 Success 00:05:53.430 00:05:53.430 real 0m1.455s 00:05:53.430 user 0m1.195s 00:05:53.430 sys 0m0.392s 00:05:53.430 19:53:40 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:53.430 19:53:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:53.430 ************************************ 00:05:53.430 END TEST json_config_extra_key 00:05:53.430 ************************************ 00:05:53.430 19:53:40 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:53.430 19:53:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:53.430 19:53:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:53.430 19:53:40 -- common/autotest_common.sh@10 -- # set +x 00:05:53.430 ************************************ 
00:05:53.430 START TEST alias_rpc 00:05:53.430 ************************************ 00:05:53.430 19:53:40 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:53.430 * Looking for test storage... 00:05:53.689 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:05:53.689 19:53:41 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:53.689 19:53:41 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3657797 00:05:53.689 19:53:41 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3657797 00:05:53.689 19:53:41 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:53.689 19:53:41 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 3657797 ']' 00:05:53.689 19:53:41 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.689 19:53:41 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:53.689 19:53:41 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.689 19:53:41 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:53.689 19:53:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.689 [2024-07-13 19:53:41.118753] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:53.689 [2024-07-13 19:53:41.118847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3657797 ] 00:05:53.689 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.689 [2024-07-13 19:53:41.186945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.689 [2024-07-13 19:53:41.225613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.948 19:53:41 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:53.948 19:53:41 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:53.948 19:53:41 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:53.948 19:53:41 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3657797 00:05:53.948 19:53:41 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 3657797 ']' 00:05:53.948 19:53:41 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 3657797 00:05:53.948 19:53:41 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:05:53.948 19:53:41 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:53.948 19:53:41 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3657797 00:05:54.206 19:53:41 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:54.206 19:53:41 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:54.206 19:53:41 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3657797' 00:05:54.206 killing process with pid 3657797 00:05:54.206 19:53:41 alias_rpc -- common/autotest_common.sh@965 -- # kill 3657797 00:05:54.206 19:53:41 alias_rpc -- common/autotest_common.sh@970 -- # wait 
3657797 00:05:54.465 00:05:54.465 real 0m0.943s 00:05:54.465 user 0m0.917s 00:05:54.465 sys 0m0.399s 00:05:54.465 19:53:41 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.465 19:53:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.465 ************************************ 00:05:54.465 END TEST alias_rpc 00:05:54.465 ************************************ 00:05:54.465 19:53:41 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:54.465 19:53:41 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:54.465 19:53:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:54.465 19:53:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.465 19:53:41 -- common/autotest_common.sh@10 -- # set +x 00:05:54.465 ************************************ 00:05:54.465 START TEST spdkcli_tcp 00:05:54.465 ************************************ 00:05:54.465 19:53:42 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:54.465 * Looking for test storage... 00:05:54.465 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:05:54.465 19:53:42 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:05:54.465 19:53:42 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:54.465 19:53:42 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:05:54.465 19:53:42 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:54.465 19:53:42 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:54.465 19:53:42 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:54.465 19:53:42 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:54.465 19:53:42 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:54.465 19:53:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:54.465 19:53:42 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3657902 00:05:54.465 19:53:42 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3657902 00:05:54.465 19:53:42 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:54.465 19:53:42 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 3657902 ']' 00:05:54.465 19:53:42 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.465 19:53:42 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:54.465 19:53:42 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.465 19:53:42 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:54.465 19:53:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:54.724 [2024-07-13 19:53:42.137520] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
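spdkcli_tcp, beginning here, bridges the target's Unix-domain RPC socket to TCP with socat and then drives rpc.py over 127.0.0.1:9998; both commands appear in the trace that follows (there with full workspace paths). Re-running them by hand:

  # Expose the default RPC socket on TCP port 9998.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &

  # JSON-RPC over TCP: retry the connection up to 100 times, 2-second timeout,
  # then list every available RPC method.
  ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods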
00:05:54.724 [2024-07-13 19:53:42.137611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3657902 ] 00:05:54.724 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.724 [2024-07-13 19:53:42.206633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.724 [2024-07-13 19:53:42.246194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.724 [2024-07-13 19:53:42.246197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.983 19:53:42 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:54.983 19:53:42 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:05:54.983 19:53:42 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3658073 00:05:54.983 19:53:42 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:54.983 19:53:42 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:54.983 [ 00:05:54.983 "spdk_get_version", 00:05:54.983 "rpc_get_methods", 00:05:54.983 "trace_get_info", 00:05:54.983 "trace_get_tpoint_group_mask", 00:05:54.983 "trace_disable_tpoint_group", 00:05:54.983 "trace_enable_tpoint_group", 00:05:54.983 "trace_clear_tpoint_mask", 00:05:54.983 "trace_set_tpoint_mask", 00:05:54.983 "vfu_tgt_set_base_path", 00:05:54.983 "framework_get_pci_devices", 00:05:54.983 "framework_get_config", 00:05:54.983 "framework_get_subsystems", 00:05:54.983 "keyring_get_keys", 00:05:54.983 "iobuf_get_stats", 00:05:54.983 "iobuf_set_options", 00:05:54.983 "sock_get_default_impl", 00:05:54.983 "sock_set_default_impl", 00:05:54.983 "sock_impl_set_options", 00:05:54.983 "sock_impl_get_options", 00:05:54.983 "vmd_rescan", 00:05:54.983 "vmd_remove_device", 00:05:54.983 "vmd_enable", 00:05:54.983 "accel_get_stats", 00:05:54.983 "accel_set_options", 00:05:54.983 "accel_set_driver", 00:05:54.983 "accel_crypto_key_destroy", 00:05:54.983 "accel_crypto_keys_get", 00:05:54.983 "accel_crypto_key_create", 00:05:54.983 "accel_assign_opc", 00:05:54.983 "accel_get_module_info", 00:05:54.984 "accel_get_opc_assignments", 00:05:54.984 "notify_get_notifications", 00:05:54.984 "notify_get_types", 00:05:54.984 "bdev_get_histogram", 00:05:54.984 "bdev_enable_histogram", 00:05:54.984 "bdev_set_qos_limit", 00:05:54.984 "bdev_set_qd_sampling_period", 00:05:54.984 "bdev_get_bdevs", 00:05:54.984 "bdev_reset_iostat", 00:05:54.984 "bdev_get_iostat", 00:05:54.984 "bdev_examine", 00:05:54.984 "bdev_wait_for_examine", 00:05:54.984 "bdev_set_options", 00:05:54.984 "scsi_get_devices", 00:05:54.984 "thread_set_cpumask", 00:05:54.984 "framework_get_scheduler", 00:05:54.984 "framework_set_scheduler", 00:05:54.984 "framework_get_reactors", 00:05:54.984 "thread_get_io_channels", 00:05:54.984 "thread_get_pollers", 00:05:54.984 "thread_get_stats", 00:05:54.984 "framework_monitor_context_switch", 00:05:54.984 "spdk_kill_instance", 00:05:54.984 "log_enable_timestamps", 00:05:54.984 "log_get_flags", 00:05:54.984 "log_clear_flag", 00:05:54.984 "log_set_flag", 00:05:54.984 "log_get_level", 00:05:54.984 "log_set_level", 00:05:54.984 "log_get_print_level", 00:05:54.984 "log_set_print_level", 00:05:54.984 "framework_enable_cpumask_locks", 00:05:54.984 "framework_disable_cpumask_locks", 00:05:54.984 "framework_wait_init", 00:05:54.984 
"framework_start_init", 00:05:54.984 "virtio_blk_create_transport", 00:05:54.984 "virtio_blk_get_transports", 00:05:54.984 "vhost_controller_set_coalescing", 00:05:54.984 "vhost_get_controllers", 00:05:54.984 "vhost_delete_controller", 00:05:54.984 "vhost_create_blk_controller", 00:05:54.984 "vhost_scsi_controller_remove_target", 00:05:54.984 "vhost_scsi_controller_add_target", 00:05:54.984 "vhost_start_scsi_controller", 00:05:54.984 "vhost_create_scsi_controller", 00:05:54.984 "ublk_recover_disk", 00:05:54.984 "ublk_get_disks", 00:05:54.984 "ublk_stop_disk", 00:05:54.984 "ublk_start_disk", 00:05:54.984 "ublk_destroy_target", 00:05:54.984 "ublk_create_target", 00:05:54.984 "nbd_get_disks", 00:05:54.984 "nbd_stop_disk", 00:05:54.984 "nbd_start_disk", 00:05:54.984 "env_dpdk_get_mem_stats", 00:05:54.984 "nvmf_stop_mdns_prr", 00:05:54.984 "nvmf_publish_mdns_prr", 00:05:54.984 "nvmf_subsystem_get_listeners", 00:05:54.984 "nvmf_subsystem_get_qpairs", 00:05:54.984 "nvmf_subsystem_get_controllers", 00:05:54.984 "nvmf_get_stats", 00:05:54.984 "nvmf_get_transports", 00:05:54.984 "nvmf_create_transport", 00:05:54.984 "nvmf_get_targets", 00:05:54.984 "nvmf_delete_target", 00:05:54.984 "nvmf_create_target", 00:05:54.984 "nvmf_subsystem_allow_any_host", 00:05:54.984 "nvmf_subsystem_remove_host", 00:05:54.984 "nvmf_subsystem_add_host", 00:05:54.984 "nvmf_ns_remove_host", 00:05:54.984 "nvmf_ns_add_host", 00:05:54.984 "nvmf_subsystem_remove_ns", 00:05:54.984 "nvmf_subsystem_add_ns", 00:05:54.984 "nvmf_subsystem_listener_set_ana_state", 00:05:54.984 "nvmf_discovery_get_referrals", 00:05:54.984 "nvmf_discovery_remove_referral", 00:05:54.984 "nvmf_discovery_add_referral", 00:05:54.984 "nvmf_subsystem_remove_listener", 00:05:54.984 "nvmf_subsystem_add_listener", 00:05:54.984 "nvmf_delete_subsystem", 00:05:54.984 "nvmf_create_subsystem", 00:05:54.984 "nvmf_get_subsystems", 00:05:54.984 "nvmf_set_crdt", 00:05:54.984 "nvmf_set_config", 00:05:54.984 "nvmf_set_max_subsystems", 00:05:54.984 "iscsi_get_histogram", 00:05:54.984 "iscsi_enable_histogram", 00:05:54.984 "iscsi_set_options", 00:05:54.984 "iscsi_get_auth_groups", 00:05:54.984 "iscsi_auth_group_remove_secret", 00:05:54.984 "iscsi_auth_group_add_secret", 00:05:54.984 "iscsi_delete_auth_group", 00:05:54.984 "iscsi_create_auth_group", 00:05:54.984 "iscsi_set_discovery_auth", 00:05:54.984 "iscsi_get_options", 00:05:54.984 "iscsi_target_node_request_logout", 00:05:54.984 "iscsi_target_node_set_redirect", 00:05:54.984 "iscsi_target_node_set_auth", 00:05:54.984 "iscsi_target_node_add_lun", 00:05:54.984 "iscsi_get_stats", 00:05:54.984 "iscsi_get_connections", 00:05:54.984 "iscsi_portal_group_set_auth", 00:05:54.984 "iscsi_start_portal_group", 00:05:54.984 "iscsi_delete_portal_group", 00:05:54.984 "iscsi_create_portal_group", 00:05:54.984 "iscsi_get_portal_groups", 00:05:54.984 "iscsi_delete_target_node", 00:05:54.984 "iscsi_target_node_remove_pg_ig_maps", 00:05:54.984 "iscsi_target_node_add_pg_ig_maps", 00:05:54.984 "iscsi_create_target_node", 00:05:54.984 "iscsi_get_target_nodes", 00:05:54.984 "iscsi_delete_initiator_group", 00:05:54.984 "iscsi_initiator_group_remove_initiators", 00:05:54.984 "iscsi_initiator_group_add_initiators", 00:05:54.984 "iscsi_create_initiator_group", 00:05:54.984 "iscsi_get_initiator_groups", 00:05:54.984 "keyring_linux_set_options", 00:05:54.984 "keyring_file_remove_key", 00:05:54.984 "keyring_file_add_key", 00:05:54.984 "vfu_virtio_create_scsi_endpoint", 00:05:54.984 "vfu_virtio_scsi_remove_target", 00:05:54.984 
"vfu_virtio_scsi_add_target", 00:05:54.984 "vfu_virtio_create_blk_endpoint", 00:05:54.984 "vfu_virtio_delete_endpoint", 00:05:54.984 "iaa_scan_accel_module", 00:05:54.984 "dsa_scan_accel_module", 00:05:54.984 "ioat_scan_accel_module", 00:05:54.984 "accel_error_inject_error", 00:05:54.984 "bdev_iscsi_delete", 00:05:54.984 "bdev_iscsi_create", 00:05:54.984 "bdev_iscsi_set_options", 00:05:54.984 "bdev_virtio_attach_controller", 00:05:54.984 "bdev_virtio_scsi_get_devices", 00:05:54.984 "bdev_virtio_detach_controller", 00:05:54.984 "bdev_virtio_blk_set_hotplug", 00:05:54.984 "bdev_ftl_set_property", 00:05:54.984 "bdev_ftl_get_properties", 00:05:54.984 "bdev_ftl_get_stats", 00:05:54.984 "bdev_ftl_unmap", 00:05:54.984 "bdev_ftl_unload", 00:05:54.984 "bdev_ftl_delete", 00:05:54.984 "bdev_ftl_load", 00:05:54.984 "bdev_ftl_create", 00:05:54.984 "bdev_aio_delete", 00:05:54.984 "bdev_aio_rescan", 00:05:54.984 "bdev_aio_create", 00:05:54.984 "blobfs_create", 00:05:54.984 "blobfs_detect", 00:05:54.984 "blobfs_set_cache_size", 00:05:54.984 "bdev_zone_block_delete", 00:05:54.984 "bdev_zone_block_create", 00:05:54.984 "bdev_delay_delete", 00:05:54.984 "bdev_delay_create", 00:05:54.984 "bdev_delay_update_latency", 00:05:54.984 "bdev_split_delete", 00:05:54.984 "bdev_split_create", 00:05:54.984 "bdev_error_inject_error", 00:05:54.984 "bdev_error_delete", 00:05:54.984 "bdev_error_create", 00:05:54.984 "bdev_raid_set_options", 00:05:54.984 "bdev_raid_remove_base_bdev", 00:05:54.984 "bdev_raid_add_base_bdev", 00:05:54.984 "bdev_raid_delete", 00:05:54.984 "bdev_raid_create", 00:05:54.984 "bdev_raid_get_bdevs", 00:05:54.984 "bdev_lvol_set_parent_bdev", 00:05:54.984 "bdev_lvol_set_parent", 00:05:54.984 "bdev_lvol_check_shallow_copy", 00:05:54.984 "bdev_lvol_start_shallow_copy", 00:05:54.984 "bdev_lvol_grow_lvstore", 00:05:54.984 "bdev_lvol_get_lvols", 00:05:54.984 "bdev_lvol_get_lvstores", 00:05:54.984 "bdev_lvol_delete", 00:05:54.984 "bdev_lvol_set_read_only", 00:05:54.984 "bdev_lvol_resize", 00:05:54.984 "bdev_lvol_decouple_parent", 00:05:54.984 "bdev_lvol_inflate", 00:05:54.984 "bdev_lvol_rename", 00:05:54.984 "bdev_lvol_clone_bdev", 00:05:54.984 "bdev_lvol_clone", 00:05:54.984 "bdev_lvol_snapshot", 00:05:54.984 "bdev_lvol_create", 00:05:54.984 "bdev_lvol_delete_lvstore", 00:05:54.984 "bdev_lvol_rename_lvstore", 00:05:54.984 "bdev_lvol_create_lvstore", 00:05:54.984 "bdev_passthru_delete", 00:05:54.984 "bdev_passthru_create", 00:05:54.984 "bdev_nvme_cuse_unregister", 00:05:54.984 "bdev_nvme_cuse_register", 00:05:54.984 "bdev_opal_new_user", 00:05:54.984 "bdev_opal_set_lock_state", 00:05:54.984 "bdev_opal_delete", 00:05:54.984 "bdev_opal_get_info", 00:05:54.984 "bdev_opal_create", 00:05:54.984 "bdev_nvme_opal_revert", 00:05:54.984 "bdev_nvme_opal_init", 00:05:54.984 "bdev_nvme_send_cmd", 00:05:54.984 "bdev_nvme_get_path_iostat", 00:05:54.984 "bdev_nvme_get_mdns_discovery_info", 00:05:54.984 "bdev_nvme_stop_mdns_discovery", 00:05:54.984 "bdev_nvme_start_mdns_discovery", 00:05:54.984 "bdev_nvme_set_multipath_policy", 00:05:54.984 "bdev_nvme_set_preferred_path", 00:05:54.984 "bdev_nvme_get_io_paths", 00:05:54.984 "bdev_nvme_remove_error_injection", 00:05:54.984 "bdev_nvme_add_error_injection", 00:05:54.984 "bdev_nvme_get_discovery_info", 00:05:54.984 "bdev_nvme_stop_discovery", 00:05:54.984 "bdev_nvme_start_discovery", 00:05:54.984 "bdev_nvme_get_controller_health_info", 00:05:54.984 "bdev_nvme_disable_controller", 00:05:54.984 "bdev_nvme_enable_controller", 00:05:54.984 "bdev_nvme_reset_controller", 00:05:54.984 
"bdev_nvme_get_transport_statistics", 00:05:54.984 "bdev_nvme_apply_firmware", 00:05:54.984 "bdev_nvme_detach_controller", 00:05:54.984 "bdev_nvme_get_controllers", 00:05:54.984 "bdev_nvme_attach_controller", 00:05:54.984 "bdev_nvme_set_hotplug", 00:05:54.984 "bdev_nvme_set_options", 00:05:54.984 "bdev_null_resize", 00:05:54.984 "bdev_null_delete", 00:05:54.984 "bdev_null_create", 00:05:54.984 "bdev_malloc_delete", 00:05:54.984 "bdev_malloc_create" 00:05:54.984 ] 00:05:54.984 19:53:42 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:54.984 19:53:42 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:54.984 19:53:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:55.244 19:53:42 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:55.244 19:53:42 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3657902 00:05:55.244 19:53:42 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 3657902 ']' 00:05:55.244 19:53:42 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 3657902 00:05:55.244 19:53:42 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:05:55.244 19:53:42 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:55.244 19:53:42 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3657902 00:05:55.244 19:53:42 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:55.244 19:53:42 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:55.244 19:53:42 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3657902' 00:05:55.244 killing process with pid 3657902 00:05:55.244 19:53:42 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 3657902 00:05:55.244 19:53:42 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 3657902 00:05:55.504 00:05:55.504 real 0m0.984s 00:05:55.504 user 0m1.632s 00:05:55.504 sys 0m0.469s 00:05:55.504 19:53:42 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.504 19:53:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:55.504 ************************************ 00:05:55.504 END TEST spdkcli_tcp 00:05:55.504 ************************************ 00:05:55.504 19:53:43 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:55.504 19:53:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:55.504 19:53:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:55.504 19:53:43 -- common/autotest_common.sh@10 -- # set +x 00:05:55.504 ************************************ 00:05:55.504 START TEST dpdk_mem_utility 00:05:55.504 ************************************ 00:05:55.504 19:53:43 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:55.504 * Looking for test storage... 
00:05:55.504 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:05:55.504 19:53:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:55.504 19:53:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3658194 00:05:55.504 19:53:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3658194 00:05:55.504 19:53:43 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 3658194 ']' 00:05:55.504 19:53:43 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.504 19:53:43 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:55.504 19:53:43 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.504 19:53:43 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:55.504 19:53:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:55.504 19:53:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.763 [2024-07-13 19:53:43.179404] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:55.763 [2024-07-13 19:53:43.179494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658194 ] 00:05:55.763 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.763 [2024-07-13 19:53:43.246647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.763 [2024-07-13 19:53:43.285664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.029 19:53:43 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:56.029 19:53:43 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:05:56.029 19:53:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:56.029 19:53:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:56.029 19:53:43 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.029 19:53:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:56.029 { 00:05:56.029 "filename": "/tmp/spdk_mem_dump.txt" 00:05:56.029 } 00:05:56.029 19:53:43 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.029 19:53:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:56.029 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:56.029 1 heaps totaling size 814.000000 MiB 00:05:56.029 size: 814.000000 MiB heap id: 0 00:05:56.029 end heaps---------- 00:05:56.029 8 mempools totaling size 598.116089 MiB 00:05:56.029 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:56.029 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:56.029 size: 84.521057 MiB name: bdev_io_3658194 00:05:56.029 size: 51.011292 MiB name: evtpool_3658194 00:05:56.029 size: 50.003479 
MiB name: msgpool_3658194 00:05:56.029 size: 21.763794 MiB name: PDU_Pool 00:05:56.029 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:56.029 size: 0.026123 MiB name: Session_Pool 00:05:56.029 end mempools------- 00:05:56.029 6 memzones totaling size 4.142822 MiB 00:05:56.029 size: 1.000366 MiB name: RG_ring_0_3658194 00:05:56.029 size: 1.000366 MiB name: RG_ring_1_3658194 00:05:56.029 size: 1.000366 MiB name: RG_ring_4_3658194 00:05:56.029 size: 1.000366 MiB name: RG_ring_5_3658194 00:05:56.029 size: 0.125366 MiB name: RG_ring_2_3658194 00:05:56.029 size: 0.015991 MiB name: RG_ring_3_3658194 00:05:56.029 end memzones------- 00:05:56.029 19:53:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:56.029 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:56.029 list of free elements. size: 12.519348 MiB 00:05:56.029 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:56.029 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:56.029 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:56.029 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:56.029 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:56.029 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:56.029 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:56.029 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:56.029 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:56.029 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:56.029 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:56.029 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:56.029 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:56.029 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:56.029 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:56.029 list of standard malloc elements. 
size: 199.218079 MiB 00:05:56.029 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:56.029 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:56.029 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:56.029 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:56.029 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:56.029 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:56.029 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:56.029 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:56.029 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:56.029 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:56.029 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:56.029 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:56.029 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:56.029 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:56.029 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:56.029 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:56.029 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:56.029 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:56.029 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:56.029 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:56.029 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:56.029 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:56.029 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:56.029 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:56.029 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:56.029 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:56.029 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:56.029 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:56.029 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:56.029 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:56.029 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:56.029 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:56.029 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:56.029 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:56.029 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:56.029 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:56.029 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:56.029 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:56.029 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:56.029 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:56.029 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:56.029 list of memzone associated elements. 
size: 602.262573 MiB 00:05:56.029 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:56.029 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:56.029 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:56.029 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:56.029 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:56.029 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3658194_0 00:05:56.029 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:56.029 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3658194_0 00:05:56.029 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:56.029 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3658194_0 00:05:56.029 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:56.029 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:56.029 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:56.029 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:56.029 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:56.029 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3658194 00:05:56.029 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:56.029 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3658194 00:05:56.029 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:56.029 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3658194 00:05:56.029 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:56.029 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:56.029 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:56.029 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:56.029 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:56.029 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:56.029 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:56.029 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:56.029 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:56.029 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3658194 00:05:56.029 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:56.029 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3658194 00:05:56.029 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:56.029 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3658194 00:05:56.029 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:56.029 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3658194 00:05:56.029 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:56.029 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3658194 00:05:56.029 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:56.029 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:56.029 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:56.029 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:56.029 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:56.029 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:56.029 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:56.029 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3658194 00:05:56.029 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:56.029 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:56.029 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:56.029 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:56.029 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:56.029 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3658194 00:05:56.029 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:56.029 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:56.029 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:56.029 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3658194 00:05:56.029 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:56.029 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3658194 00:05:56.029 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:56.029 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:56.029 19:53:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:56.029 19:53:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3658194 00:05:56.029 19:53:43 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 3658194 ']' 00:05:56.029 19:53:43 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 3658194 00:05:56.029 19:53:43 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:05:56.029 19:53:43 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:56.029 19:53:43 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3658194 00:05:56.029 19:53:43 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:56.029 19:53:43 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:56.029 19:53:43 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3658194' 00:05:56.029 killing process with pid 3658194 00:05:56.029 19:53:43 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 3658194 00:05:56.029 19:53:43 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 3658194 00:05:56.288 00:05:56.288 real 0m0.822s 00:05:56.288 user 0m0.739s 00:05:56.288 sys 0m0.377s 00:05:56.288 19:53:43 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.288 19:53:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:56.288 ************************************ 00:05:56.288 END TEST dpdk_mem_utility 00:05:56.288 ************************************ 00:05:56.288 19:53:43 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:56.288 19:53:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:56.288 19:53:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.288 19:53:43 -- common/autotest_common.sh@10 -- # set +x 00:05:56.546 ************************************ 00:05:56.546 START TEST event 00:05:56.546 ************************************ 00:05:56.546 19:53:43 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:56.546 * Looking for test storage... 
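# The dpdk_mem_utility run above asks the target for a DPDK memory dump and then
# post-processes it with the helper script named in MEM_SCRIPT. A minimal sketch of that
# sequence, assuming the target is already listening on /var/tmp/spdk.sock (the dump path
# comes back from the RPC; /tmp/spdk_mem_dump.txt in this run):
#   ./scripts/rpc.py env_dpdk_get_mem_stats    # returns {"filename": "/tmp/spdk_mem_dump.txt"}
#   ./scripts/dpdk_mem_info.py                 # heap / mempool / memzone totals
#   ./scripts/dpdk_mem_info.py -m 0            # per-element detail, as the test invokes it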
00:05:56.546 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:56.546 19:53:44 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:56.546 19:53:44 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:56.546 19:53:44 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:56.546 19:53:44 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:56.546 19:53:44 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.546 19:53:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.546 ************************************ 00:05:56.546 START TEST event_perf 00:05:56.546 ************************************ 00:05:56.546 19:53:44 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:56.546 Running I/O for 1 seconds...[2024-07-13 19:53:44.131233] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:56.546 [2024-07-13 19:53:44.131337] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658507 ] 00:05:56.546 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.546 [2024-07-13 19:53:44.201494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:56.805 [2024-07-13 19:53:44.243348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.805 [2024-07-13 19:53:44.243449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.805 [2024-07-13 19:53:44.243539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.805 [2024-07-13 19:53:44.243542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.742 Running I/O for 1 seconds... 00:05:57.742 lcore 0: 203970 00:05:57.742 lcore 1: 203970 00:05:57.742 lcore 2: 203969 00:05:57.742 lcore 3: 203970 00:05:57.742 done. 00:05:57.742 00:05:57.742 real 0m1.184s 00:05:57.742 user 0m4.087s 00:05:57.742 sys 0m0.096s 00:05:57.742 19:53:45 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:57.742 19:53:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:57.742 ************************************ 00:05:57.742 END TEST event_perf 00:05:57.742 ************************************ 00:05:57.742 19:53:45 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:57.742 19:53:45 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:57.742 19:53:45 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:57.742 19:53:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.742 ************************************ 00:05:57.742 START TEST event_reactor 00:05:57.742 ************************************ 00:05:57.742 19:53:45 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:57.742 [2024-07-13 19:53:45.395706] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
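# event_perf above is a standalone reactor benchmark rather than an RPC-driven app:
# -m 0xF starts a reactor on each of four cores and -t 1 runs the measurement for one
# second, after which the per-lcore event counts seen in the output are printed.
# The invocation, as traced:
#   ./test/event/event_perf/event_perf -m 0xF -t 1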
00:05:57.742 [2024-07-13 19:53:45.395828] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658697 ] 00:05:58.001 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.001 [2024-07-13 19:53:45.465503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.001 [2024-07-13 19:53:45.504047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.936 test_start 00:05:58.936 oneshot 00:05:58.936 tick 100 00:05:58.936 tick 100 00:05:58.936 tick 250 00:05:58.936 tick 100 00:05:58.936 tick 100 00:05:58.936 tick 100 00:05:58.936 tick 250 00:05:58.936 tick 500 00:05:58.936 tick 100 00:05:58.936 tick 100 00:05:58.936 tick 250 00:05:58.936 tick 100 00:05:58.936 tick 100 00:05:58.936 test_end 00:05:58.936 00:05:58.936 real 0m1.179s 00:05:58.936 user 0m1.082s 00:05:58.936 sys 0m0.094s 00:05:58.936 19:53:46 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:58.936 19:53:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:58.936 ************************************ 00:05:58.936 END TEST event_reactor 00:05:58.936 ************************************ 00:05:59.196 19:53:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:59.196 19:53:46 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:59.196 19:53:46 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:59.196 19:53:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.196 ************************************ 00:05:59.196 START TEST event_reactor_perf 00:05:59.196 ************************************ 00:05:59.196 19:53:46 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:59.196 [2024-07-13 19:53:46.661866] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
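# event_reactor above runs a single reactor (core mask 0x1) for one second and prints a
# line as each of its timers fires; the oneshot/tick lines are that test's own progress
# output. The invocation, as traced:
#   ./test/event/reactor/reactor -t 1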
00:05:59.196 [2024-07-13 19:53:46.661948] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658862 ] 00:05:59.196 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.196 [2024-07-13 19:53:46.735644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.196 [2024-07-13 19:53:46.775556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.573 test_start 00:06:00.573 test_end 00:06:00.573 Performance: 987652 events per second 00:06:00.573 00:06:00.573 real 0m1.184s 00:06:00.573 user 0m1.087s 00:06:00.573 sys 0m0.093s 00:06:00.573 19:53:47 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.573 19:53:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:00.573 ************************************ 00:06:00.573 END TEST event_reactor_perf 00:06:00.573 ************************************ 00:06:00.573 19:53:47 event -- event/event.sh@49 -- # uname -s 00:06:00.573 19:53:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:00.573 19:53:47 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:00.573 19:53:47 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:00.573 19:53:47 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.573 19:53:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.573 ************************************ 00:06:00.573 START TEST event_scheduler 00:06:00.573 ************************************ 00:06:00.573 19:53:47 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:00.573 * Looking for test storage... 00:06:00.573 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:06:00.573 19:53:48 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:00.573 19:53:48 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3659147 00:06:00.573 19:53:48 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.573 19:53:48 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3659147 00:06:00.573 19:53:48 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:00.573 19:53:48 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 3659147 ']' 00:06:00.573 19:53:48 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.573 19:53:48 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:00.573 19:53:48 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
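# event_reactor_perf above is the companion throughput check: a single reactor is loaded
# for one second and an aggregate rate is reported ("Performance: ... events per second"
# in the output). The invocation, as traced:
#   ./test/event/reactor_perf/reactor_perf -t 1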
00:06:00.573 19:53:48 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:00.573 19:53:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:00.573 [2024-07-13 19:53:48.041861] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:00.573 [2024-07-13 19:53:48.041922] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3659147 ] 00:06:00.573 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.573 [2024-07-13 19:53:48.104345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:00.573 [2024-07-13 19:53:48.149091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.573 [2024-07-13 19:53:48.149174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.573 [2024-07-13 19:53:48.149274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.573 [2024-07-13 19:53:48.149276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.573 19:53:48 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:00.573 19:53:48 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:00.573 19:53:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:00.573 19:53:48 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.573 19:53:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:00.573 POWER: Env isn't set yet! 00:06:00.573 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:00.573 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:00.573 POWER: Cannot set governor of lcore 0 to userspace 00:06:00.573 POWER: Attempting to initialise PSTAT power management... 
00:06:00.573 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:00.573 POWER: Initialized successfully for lcore 0 power management 00:06:00.833 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:00.833 POWER: Initialized successfully for lcore 1 power management 00:06:00.833 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:00.833 POWER: Initialized successfully for lcore 2 power management 00:06:00.833 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:00.833 POWER: Initialized successfully for lcore 3 power management 00:06:00.833 [2024-07-13 19:53:48.257652] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:00.833 [2024-07-13 19:53:48.257667] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:00.833 [2024-07-13 19:53:48.257678] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:00.833 19:53:48 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.833 19:53:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:00.833 19:53:48 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.833 19:53:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:00.833 [2024-07-13 19:53:48.319942] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:00.833 19:53:48 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.833 19:53:48 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:00.833 19:53:48 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:00.833 19:53:48 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.833 19:53:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:00.833 ************************************ 00:06:00.833 START TEST scheduler_create_thread 00:06:00.833 ************************************ 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.833 2 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.833 3 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.833 4 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.833 5 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.833 6 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.833 7 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.833 8 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.833 9 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.833 10 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.833 19:53:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.770 19:53:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.770 19:53:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:01.770 19:53:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.770 19:53:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.149 19:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.149 19:53:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:03.149 19:53:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:03.149 19:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.149 19:53:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.086 19:53:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.086 00:06:04.086 real 0m3.381s 00:06:04.086 user 0m0.023s 00:06:04.086 sys 0m0.009s 00:06:04.086 19:53:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.086 19:53:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.086 ************************************ 00:06:04.086 END TEST scheduler_create_thread 00:06:04.086 ************************************ 00:06:04.345 19:53:51 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:04.345 19:53:51 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3659147 00:06:04.345 19:53:51 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 3659147 ']' 00:06:04.345 19:53:51 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 3659147 00:06:04.345 19:53:51 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 
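# scheduler_create_thread above drives the scheduler test app entirely through its RPC
# plugin: the dynamic scheduler is selected, the framework is started, and threads are
# created with a name, cpumask and active percentage, then retargeted or deleted by the
# id returned from the create call (11 and 12 in this run). The calls, as traced
# (rpc_cmd is the test harness's wrapper for issuing RPCs to the app):
#   rpc_cmd framework_set_scheduler dynamic
#   rpc_cmd framework_start_init
#   rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
#   rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
#   rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12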
00:06:04.345 19:53:51 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:04.345 19:53:51 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3659147 00:06:04.345 19:53:51 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:04.345 19:53:51 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:04.345 19:53:51 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3659147' 00:06:04.345 killing process with pid 3659147 00:06:04.345 19:53:51 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 3659147 00:06:04.345 19:53:51 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 3659147 00:06:04.604 [2024-07-13 19:53:52.124053] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:04.604 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:04.604 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:04.604 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:04.604 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:04.604 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:04.604 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:04.604 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:04.604 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:04.863 00:06:04.863 real 0m4.416s 00:06:04.863 user 0m7.880s 00:06:04.863 sys 0m0.350s 00:06:04.863 19:53:52 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.863 19:53:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.863 ************************************ 00:06:04.863 END TEST event_scheduler 00:06:04.863 ************************************ 00:06:04.863 19:53:52 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:04.863 19:53:52 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:04.863 19:53:52 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:04.863 19:53:52 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.863 19:53:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.863 ************************************ 00:06:04.863 START TEST app_repeat 00:06:04.863 ************************************ 00:06:04.863 19:53:52 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:04.863 19:53:52 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.863 19:53:52 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.863 19:53:52 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:04.863 19:53:52 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.863 19:53:52 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:04.863 19:53:52 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:04.863 19:53:52 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:04.863 19:53:52 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3659998 00:06:04.863 19:53:52 
event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.863 19:53:52 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:04.863 19:53:52 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3659998' 00:06:04.863 Process app_repeat pid: 3659998 00:06:04.863 19:53:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:04.863 19:53:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:04.863 spdk_app_start Round 0 00:06:04.863 19:53:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3659998 /var/tmp/spdk-nbd.sock 00:06:04.863 19:53:52 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3659998 ']' 00:06:04.863 19:53:52 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.863 19:53:52 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:04.863 19:53:52 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:04.863 19:53:52 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:04.863 19:53:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.863 [2024-07-13 19:53:52.456588] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:04.863 [2024-07-13 19:53:52.456676] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3659998 ] 00:06:04.864 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.122 [2024-07-13 19:53:52.527195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.122 [2024-07-13 19:53:52.567597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.122 [2024-07-13 19:53:52.567601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.123 19:53:52 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.123 19:53:52 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:05.123 19:53:52 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.399 Malloc0 00:06:05.399 19:53:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.399 Malloc1 00:06:05.399 19:53:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.399 19:53:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.399 19:53:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.399 19:53:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:05.399 19:53:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.399 19:53:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:05.400 19:53:53 
event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.400 19:53:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.400 19:53:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.400 19:53:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:05.400 19:53:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.400 19:53:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:05.400 19:53:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:05.400 19:53:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:05.400 19:53:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.400 19:53:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.659 /dev/nbd0 00:06:05.659 19:53:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.659 19:53:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.659 19:53:53 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:05.659 19:53:53 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:05.659 19:53:53 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:05.659 19:53:53 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:05.659 19:53:53 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:05.659 19:53:53 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:05.659 19:53:53 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:05.659 19:53:53 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:05.659 19:53:53 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.659 1+0 records in 00:06:05.659 1+0 records out 00:06:05.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239995 s, 17.1 MB/s 00:06:05.659 19:53:53 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:05.659 19:53:53 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:05.659 19:53:53 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:05.659 19:53:53 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:05.659 19:53:53 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:05.659 19:53:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.659 19:53:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.659 19:53:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.917 /dev/nbd1 00:06:05.917 19:53:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.917 19:53:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.917 19:53:53 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:05.917 19:53:53 
event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:05.917 19:53:53 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:05.917 19:53:53 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:05.917 19:53:53 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:05.917 19:53:53 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:05.917 19:53:53 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:05.917 19:53:53 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:05.917 19:53:53 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.917 1+0 records in 00:06:05.917 1+0 records out 00:06:05.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017888 s, 22.9 MB/s 00:06:05.917 19:53:53 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:05.917 19:53:53 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:05.917 19:53:53 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:05.917 19:53:53 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:05.917 19:53:53 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:05.918 19:53:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.918 19:53:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.918 19:53:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.918 19:53:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.918 19:53:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.176 19:53:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:06.176 { 00:06:06.176 "nbd_device": "/dev/nbd0", 00:06:06.176 "bdev_name": "Malloc0" 00:06:06.176 }, 00:06:06.176 { 00:06:06.176 "nbd_device": "/dev/nbd1", 00:06:06.176 "bdev_name": "Malloc1" 00:06:06.176 } 00:06:06.177 ]' 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:06.177 { 00:06:06.177 "nbd_device": "/dev/nbd0", 00:06:06.177 "bdev_name": "Malloc0" 00:06:06.177 }, 00:06:06.177 { 00:06:06.177 "nbd_device": "/dev/nbd1", 00:06:06.177 "bdev_name": "Malloc1" 00:06:06.177 } 00:06:06.177 ]' 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.177 /dev/nbd1' 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.177 /dev/nbd1' 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:06.177 
19:53:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:06.177 256+0 records in 00:06:06.177 256+0 records out 00:06:06.177 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114613 s, 91.5 MB/s 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:06.177 256+0 records in 00:06:06.177 256+0 records out 00:06:06.177 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210715 s, 49.8 MB/s 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:06.177 256+0 records in 00:06:06.177 256+0 records out 00:06:06.177 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022177 s, 47.3 MB/s 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.177 19:53:53 
event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.177 19:53:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.436 19:53:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.436 19:53:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.436 19:53:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.436 19:53:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.436 19:53:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.436 19:53:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.436 19:53:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.436 19:53:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.436 19:53:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.436 19:53:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.694 19:53:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.694 19:53:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.694 19:53:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.694 19:53:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.694 19:53:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.694 19:53:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.694 19:53:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.694 19:53:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.694 19:53:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.694 19:53:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.694 19:53:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.694 19:53:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.694 19:53:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.694 19:53:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.694 19:53:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.694 19:53:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.694 19:53:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.694 19:53:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:06.694 19:53:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.694 19:53:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.694 19:53:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.694 19:53:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.694 19:53:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.694 19:53:54 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 
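The Round 0 trace above exercises SPDK's nbd data-verify helpers end to end over the /var/tmp/spdk-nbd.sock RPC socket: two 64 MiB Malloc bdevs are created, exported as /dev/nbd0 and /dev/nbd1, written from a random reference file with dd, compared back with cmp, and then the exports and the app instance are torn down. A minimal sketch of that flow, assuming a running app_repeat instance on the same socket (the scratch-file path is shortened here and is illustrative, not the harness's real path), would look roughly like:

  rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  "$rpc" -s "$sock" bdev_malloc_create 64 4096                  # -> Malloc0 (64 MiB, 4 KiB blocks)
  "$rpc" -s "$sock" bdev_malloc_create 64 4096                  # -> Malloc1
  "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
  "$rpc" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256      # 1 MiB of reference data
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest "$nbd"                        # read back and verify
  done
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1
  "$rpc" -s "$sock" spdk_kill_instance SIGTERM                  # end of one round

The same cycle repeats for Rounds 1 and 2 below; only the pid and timestamps change.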
00:06:06.953 19:53:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:07.211 [2024-07-13 19:53:54.695723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.211 [2024-07-13 19:53:54.731496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.211 [2024-07-13 19:53:54.731499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.211 [2024-07-13 19:53:54.771439] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.211 [2024-07-13 19:53:54.771489] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:10.494 19:53:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:10.494 19:53:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:10.494 spdk_app_start Round 1 00:06:10.494 19:53:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3659998 /var/tmp/spdk-nbd.sock 00:06:10.494 19:53:57 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3659998 ']' 00:06:10.494 19:53:57 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.494 19:53:57 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:10.494 19:53:57 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:10.494 19:53:57 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:10.494 19:53:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.494 19:53:57 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:10.494 19:53:57 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:10.494 19:53:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.494 Malloc0 00:06:10.494 19:53:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.494 Malloc1 00:06:10.494 19:53:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.494 19:53:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.494 19:53:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.494 19:53:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.494 19:53:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.494 19:53:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.494 19:53:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.494 19:53:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.494 19:53:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.494 19:53:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.494 19:53:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.494 
19:53:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.494 19:53:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:10.494 19:53:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.494 19:53:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.494 19:53:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.752 /dev/nbd0 00:06:10.752 19:53:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.752 19:53:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.752 19:53:58 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:10.752 19:53:58 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:10.752 19:53:58 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:10.752 19:53:58 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:10.752 19:53:58 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:10.752 19:53:58 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:10.752 19:53:58 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:10.752 19:53:58 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:10.752 19:53:58 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.752 1+0 records in 00:06:10.752 1+0 records out 00:06:10.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222685 s, 18.4 MB/s 00:06:10.752 19:53:58 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:10.752 19:53:58 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:10.752 19:53:58 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:10.752 19:53:58 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:10.752 19:53:58 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:10.752 19:53:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.752 19:53:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.752 19:53:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:11.010 /dev/nbd1 00:06:11.010 19:53:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:11.010 19:53:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:11.010 19:53:58 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:11.010 19:53:58 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:11.010 19:53:58 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:11.010 19:53:58 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:11.010 19:53:58 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:11.010 19:53:58 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:11.010 19:53:58 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:11.010 
19:53:58 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:11.010 19:53:58 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.010 1+0 records in 00:06:11.011 1+0 records out 00:06:11.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251773 s, 16.3 MB/s 00:06:11.011 19:53:58 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:11.011 19:53:58 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:11.011 19:53:58 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:11.011 19:53:58 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:11.011 19:53:58 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:11.011 19:53:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.011 19:53:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.011 19:53:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.011 19:53:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.011 19:53:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.011 19:53:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:11.011 { 00:06:11.011 "nbd_device": "/dev/nbd0", 00:06:11.011 "bdev_name": "Malloc0" 00:06:11.011 }, 00:06:11.011 { 00:06:11.011 "nbd_device": "/dev/nbd1", 00:06:11.011 "bdev_name": "Malloc1" 00:06:11.011 } 00:06:11.011 ]' 00:06:11.011 19:53:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:11.011 { 00:06:11.011 "nbd_device": "/dev/nbd0", 00:06:11.011 "bdev_name": "Malloc0" 00:06:11.011 }, 00:06:11.011 { 00:06:11.011 "nbd_device": "/dev/nbd1", 00:06:11.011 "bdev_name": "Malloc1" 00:06:11.011 } 00:06:11.011 ]' 00:06:11.011 19:53:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:11.270 /dev/nbd1' 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:11.270 /dev/nbd1' 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:11.270 19:53:58 
event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:11.270 256+0 records in 00:06:11.270 256+0 records out 00:06:11.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011021 s, 95.1 MB/s 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.270 256+0 records in 00:06:11.270 256+0 records out 00:06:11.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02103 s, 49.9 MB/s 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.270 256+0 records in 00:06:11.270 256+0 records out 00:06:11.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221511 s, 47.3 MB/s 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.270 19:53:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.528 19:53:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.528 19:53:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.528 19:53:58 
event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.528 19:53:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.528 19:53:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.528 19:53:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.528 19:53:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.528 19:53:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.528 19:53:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.528 19:53:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:11.528 19:53:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:11.528 19:53:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:11.528 19:53:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:11.528 19:53:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.528 19:53:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.528 19:53:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:11.786 19:53:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.786 19:53:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.786 19:53:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.786 19:53:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.786 19:53:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.786 19:53:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:11.786 19:53:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:11.786 19:53:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.786 19:53:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:11.786 19:53:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:11.786 19:53:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.786 19:53:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:11.786 19:53:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:11.786 19:53:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:11.786 19:53:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:11.786 19:53:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:11.786 19:53:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:11.786 19:53:59 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:12.044 19:53:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:12.302 [2024-07-13 19:53:59.760701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.302 [2024-07-13 19:53:59.796889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.302 [2024-07-13 19:53:59.796892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.302 [2024-07-13 19:53:59.837883] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:06:12.302 [2024-07-13 19:53:59.837930] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:15.585 19:54:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:15.585 19:54:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:15.585 spdk_app_start Round 2 00:06:15.585 19:54:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3659998 /var/tmp/spdk-nbd.sock 00:06:15.585 19:54:02 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3659998 ']' 00:06:15.585 19:54:02 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.585 19:54:02 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:15.585 19:54:02 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:15.585 19:54:02 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:15.585 19:54:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.585 19:54:02 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:15.585 19:54:02 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:15.585 19:54:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.585 Malloc0 00:06:15.585 19:54:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.585 Malloc1 00:06:15.585 19:54:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.585 19:54:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.586 19:54:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.586 19:54:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:15.586 19:54:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.586 19:54:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:15.586 19:54:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.586 19:54:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.586 19:54:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.586 19:54:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:15.586 19:54:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.586 19:54:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:15.586 19:54:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:15.586 19:54:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:15.586 19:54:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.586 19:54:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:15.844 /dev/nbd0 00:06:15.844 
19:54:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:15.844 19:54:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:15.844 19:54:03 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:15.844 19:54:03 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:15.844 19:54:03 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:15.844 19:54:03 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:15.844 19:54:03 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:15.844 19:54:03 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:15.844 19:54:03 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:15.844 19:54:03 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:15.844 19:54:03 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.844 1+0 records in 00:06:15.844 1+0 records out 00:06:15.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237918 s, 17.2 MB/s 00:06:15.844 19:54:03 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:15.844 19:54:03 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:15.844 19:54:03 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:15.844 19:54:03 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:15.844 19:54:03 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:15.844 19:54:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.844 19:54:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.844 19:54:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:15.844 /dev/nbd1 00:06:16.103 19:54:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:16.103 19:54:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:16.103 19:54:03 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:16.103 19:54:03 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:16.103 19:54:03 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:16.103 19:54:03 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:16.103 19:54:03 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:16.103 19:54:03 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:16.103 19:54:03 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:16.103 19:54:03 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:16.103 19:54:03 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.103 1+0 records in 00:06:16.103 1+0 records out 00:06:16.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285394 s, 14.4 MB/s 00:06:16.103 19:54:03 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:16.103 19:54:03 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:16.103 19:54:03 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:16.103 19:54:03 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:16.103 19:54:03 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:16.103 19:54:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.103 19:54:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.103 19:54:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.103 19:54:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.103 19:54:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.103 19:54:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:16.103 { 00:06:16.103 "nbd_device": "/dev/nbd0", 00:06:16.103 "bdev_name": "Malloc0" 00:06:16.103 }, 00:06:16.103 { 00:06:16.103 "nbd_device": "/dev/nbd1", 00:06:16.103 "bdev_name": "Malloc1" 00:06:16.103 } 00:06:16.103 ]' 00:06:16.103 19:54:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:16.103 { 00:06:16.103 "nbd_device": "/dev/nbd0", 00:06:16.103 "bdev_name": "Malloc0" 00:06:16.103 }, 00:06:16.103 { 00:06:16.103 "nbd_device": "/dev/nbd1", 00:06:16.103 "bdev_name": "Malloc1" 00:06:16.103 } 00:06:16.103 ]' 00:06:16.103 19:54:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.362 19:54:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.362 /dev/nbd1' 00:06:16.362 19:54:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.362 /dev/nbd1' 00:06:16.362 19:54:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.362 19:54:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:16.362 19:54:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:16.362 19:54:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:16.362 19:54:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:16.363 256+0 records in 00:06:16.363 256+0 records out 00:06:16.363 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106829 s, 98.2 MB/s 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:16.363 256+0 records in 00:06:16.363 256+0 records out 00:06:16.363 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208025 s, 50.4 MB/s 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:16.363 256+0 records in 00:06:16.363 256+0 records out 00:06:16.363 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214186 s, 49.0 MB/s 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.363 19:54:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:16.622 19:54:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:16.622 19:54:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:16.622 19:54:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:16.622 19:54:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.622 19:54:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.622 19:54:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:16.622 19:54:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.622 19:54:04 event.app_repeat -- bdev/nbd_common.sh@45 
-- # return 0 00:06:16.622 19:54:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.622 19:54:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.622 19:54:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.622 19:54:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:16.622 19:54:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.622 19:54:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.622 19:54:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.622 19:54:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.622 19:54:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.622 19:54:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.622 19:54:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.622 19:54:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.622 19:54:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.880 19:54:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:16.880 19:54:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:16.880 19:54:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.880 19:54:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:16.880 19:54:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:16.880 19:54:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.880 19:54:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:16.880 19:54:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:16.880 19:54:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:16.880 19:54:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:16.880 19:54:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:16.880 19:54:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:16.880 19:54:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:17.139 19:54:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:17.399 [2024-07-13 19:54:04.845769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.399 [2024-07-13 19:54:04.882124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.399 [2024-07-13 19:54:04.882128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.399 [2024-07-13 19:54:04.922011] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:17.399 [2024-07-13 19:54:04.922051] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
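After Round 2's final sleep window, the harness tears the application down through its killprocess helper, visible in the waitforlisten/ps/kill/wait sequence just below. A rough reconstruction of that teardown logic, inferred from the trace rather than quoted from autotest_common.sh (the function name here is a placeholder), is:

  killprocess_sketch() {
    local pid=$1
    [ -z "$pid" ] && return 1                 # nothing to kill
    kill -0 "$pid" || return 0                # already gone
    if [ "$(uname)" = Linux ]; then
      local name
      name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0 in the trace
      [ "$name" = sudo ] && return 1          # never signal a sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                               # reap the child before the next test
  }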
00:06:20.759 19:54:07 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3659998 /var/tmp/spdk-nbd.sock 00:06:20.759 19:54:07 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3659998 ']' 00:06:20.759 19:54:07 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.759 19:54:07 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:20.759 19:54:07 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:20.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.759 19:54:07 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:20.759 19:54:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.759 19:54:07 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:20.759 19:54:07 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:20.759 19:54:07 event.app_repeat -- event/event.sh@39 -- # killprocess 3659998 00:06:20.759 19:54:07 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 3659998 ']' 00:06:20.759 19:54:07 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 3659998 00:06:20.759 19:54:07 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:20.759 19:54:07 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:20.759 19:54:07 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3659998 00:06:20.759 19:54:07 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:20.759 19:54:07 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:20.759 19:54:07 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3659998' 00:06:20.759 killing process with pid 3659998 00:06:20.759 19:54:07 event.app_repeat -- common/autotest_common.sh@965 -- # kill 3659998 00:06:20.759 19:54:07 event.app_repeat -- common/autotest_common.sh@970 -- # wait 3659998 00:06:20.759 spdk_app_start is called in Round 0. 00:06:20.759 Shutdown signal received, stop current app iteration 00:06:20.759 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:06:20.759 spdk_app_start is called in Round 1. 00:06:20.759 Shutdown signal received, stop current app iteration 00:06:20.759 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:06:20.759 spdk_app_start is called in Round 2. 00:06:20.759 Shutdown signal received, stop current app iteration 00:06:20.759 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:06:20.759 spdk_app_start is called in Round 3. 
00:06:20.759 Shutdown signal received, stop current app iteration 00:06:20.759 19:54:08 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:20.759 19:54:08 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:20.759 00:06:20.759 real 0m15.623s 00:06:20.759 user 0m33.246s 00:06:20.759 sys 0m3.038s 00:06:20.759 19:54:08 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:20.759 19:54:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.759 ************************************ 00:06:20.759 END TEST app_repeat 00:06:20.759 ************************************ 00:06:20.759 19:54:08 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:20.759 19:54:08 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:20.759 19:54:08 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:20.759 19:54:08 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:20.759 19:54:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.759 ************************************ 00:06:20.759 START TEST cpu_locks 00:06:20.759 ************************************ 00:06:20.759 19:54:08 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:20.759 * Looking for test storage... 00:06:20.759 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:20.759 19:54:08 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:20.759 19:54:08 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:20.760 19:54:08 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:20.760 19:54:08 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:20.760 19:54:08 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:20.760 19:54:08 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:20.760 19:54:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.760 ************************************ 00:06:20.760 START TEST default_locks 00:06:20.760 ************************************ 00:06:20.760 19:54:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:20.760 19:54:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3663522 00:06:20.760 19:54:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3663522 00:06:20.760 19:54:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.760 19:54:08 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3663522 ']' 00:06:20.760 19:54:08 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.760 19:54:08 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:20.760 19:54:08 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
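The cpu_locks suite that starts here begins with default_locks: spdk_tgt is launched with a single-core mask (-m 0x1), and the test then checks via lslocks that the target holds an spdk_cpu_lock file for that core, and that no lock survives once the target is killed. Reduced to a shell sketch (pid value copied from the log; the checks are simplified relative to the real locks_exist/no_locks helpers shown in the trace):

  pid=3663522
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by spdk_tgt"
  kill "$pid"; wait "$pid"                    # stop the target
  lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "lock released"

The "lslocks: write error" seen below is expected noise: grep -q exits as soon as it matches, closing the pipe on lslocks.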
00:06:20.760 19:54:08 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:20.760 19:54:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.760 [2024-07-13 19:54:08.280723] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:20.760 [2024-07-13 19:54:08.280805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3663522 ] 00:06:20.760 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.760 [2024-07-13 19:54:08.349634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.760 [2024-07-13 19:54:08.389006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.018 19:54:08 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:21.018 19:54:08 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:21.018 19:54:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3663522 00:06:21.018 19:54:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3663522 00:06:21.018 19:54:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.585 lslocks: write error 00:06:21.585 19:54:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3663522 00:06:21.585 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 3663522 ']' 00:06:21.585 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 3663522 00:06:21.585 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:21.585 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:21.585 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3663522 00:06:21.585 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:21.585 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:21.585 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3663522' 00:06:21.585 killing process with pid 3663522 00:06:21.585 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 3663522 00:06:21.585 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 3663522 00:06:21.844 19:54:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3663522 00:06:21.844 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:21.844 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3663522 00:06:21.844 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:21.844 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.844 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:21.844 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.844 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@651 
-- # waitforlisten 3663522 00:06:21.844 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3663522 ']' 00:06:21.844 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.844 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:21.844 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.844 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:21.844 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.844 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3663522) - No such process 00:06:21.844 ERROR: process (pid: 3663522) is no longer running 00:06:21.844 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:21.844 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:21.844 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:21.844 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:21.844 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:21.844 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:21.844 19:54:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:21.844 19:54:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:21.844 19:54:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:21.844 19:54:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:21.844 00:06:21.844 real 0m1.212s 00:06:21.844 user 0m1.163s 00:06:21.844 sys 0m0.599s 00:06:21.844 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.844 19:54:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.844 ************************************ 00:06:21.844 END TEST default_locks 00:06:21.844 ************************************ 00:06:22.103 19:54:09 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:22.103 19:54:09 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:22.103 19:54:09 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:22.103 19:54:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.103 ************************************ 00:06:22.103 START TEST default_locks_via_rpc 00:06:22.103 ************************************ 00:06:22.103 19:54:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:22.103 19:54:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3663743 00:06:22.103 19:54:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3663743 00:06:22.103 19:54:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.103 19:54:09 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3663743 ']' 00:06:22.103 19:54:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.103 19:54:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:22.103 19:54:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.103 19:54:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:22.103 19:54:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.103 [2024-07-13 19:54:09.566992] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:22.103 [2024-07-13 19:54:09.567048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3663743 ] 00:06:22.103 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.103 [2024-07-13 19:54:09.634222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.103 [2024-07-13 19:54:09.674252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.361 19:54:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:22.361 19:54:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:22.361 19:54:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:22.361 19:54:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.361 19:54:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.361 19:54:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.361 19:54:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:22.361 19:54:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:22.361 19:54:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:22.361 19:54:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:22.361 19:54:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:22.361 19:54:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.361 19:54:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.361 19:54:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.361 19:54:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3663743 00:06:22.361 19:54:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3663743 00:06:22.361 19:54:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.929 19:54:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3663743 00:06:22.929 19:54:10 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 3663743 ']' 00:06:22.929 19:54:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 3663743 00:06:22.929 19:54:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:22.929 19:54:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:22.929 19:54:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3663743 00:06:22.929 19:54:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:22.929 19:54:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:22.929 19:54:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3663743' 00:06:22.929 killing process with pid 3663743 00:06:22.929 19:54:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 3663743 00:06:22.929 19:54:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 3663743 00:06:23.189 00:06:23.189 real 0m1.260s 00:06:23.189 user 0m1.213s 00:06:23.189 sys 0m0.604s 00:06:23.189 19:54:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.189 19:54:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.189 ************************************ 00:06:23.189 END TEST default_locks_via_rpc 00:06:23.189 ************************************ 00:06:23.189 19:54:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:23.189 19:54:10 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:23.189 19:54:10 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.189 19:54:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.449 ************************************ 00:06:23.449 START TEST non_locking_app_on_locked_coremask 00:06:23.449 ************************************ 00:06:23.449 19:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:23.449 19:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3664043 00:06:23.449 19:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3664043 /var/tmp/spdk.sock 00:06:23.449 19:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.449 19:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3664043 ']' 00:06:23.449 19:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.449 19:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:23.449 19:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
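The default_locks and default_locks_via_rpc runs above boil down to the same check: once spdk_tgt claims a core, a per-core lock file appears in that process's lock table, and the via_rpc variant releases and re-takes the claim at runtime with the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs. A minimal sketch of that check, assuming a target started with -m 0x1, a shortened build path, and that the standard scripts/rpc.py client exposes these two methods:

  # start a single-core target and remember its pid (paths shortened for illustration)
  ./build/bin/spdk_tgt -m 0x1 &
  pid=$!
  # the core-0 claim shows up as a lock on /var/tmp/spdk_cpu_lock_000
  lslocks -p "$pid" | grep spdk_cpu_lock
  # drop and re-acquire the claim without restarting the target
  ./scripts/rpc.py framework_disable_cpumask_locks
  ./scripts/rpc.py framework_enable_cpumask_locks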
00:06:23.449 19:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:23.449 19:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.449 [2024-07-13 19:54:10.908106] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:23.449 [2024-07-13 19:54:10.908186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3664043 ] 00:06:23.449 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.449 [2024-07-13 19:54:10.976950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.449 [2024-07-13 19:54:11.016069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.709 19:54:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:23.710 19:54:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:23.710 19:54:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3664059 00:06:23.710 19:54:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3664059 /var/tmp/spdk2.sock 00:06:23.710 19:54:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:23.710 19:54:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3664059 ']' 00:06:23.710 19:54:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.710 19:54:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:23.710 19:54:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.710 19:54:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:23.710 19:54:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.710 [2024-07-13 19:54:11.220888] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:23.710 [2024-07-13 19:54:11.220978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3664059 ] 00:06:23.710 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.710 [2024-07-13 19:54:11.314699] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:23.710 [2024-07-13 19:54:11.314724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.969 [2024-07-13 19:54:11.394180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.539 19:54:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:24.540 19:54:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:24.540 19:54:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3664043 00:06:24.540 19:54:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3664043 00:06:24.540 19:54:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.476 lslocks: write error 00:06:25.476 19:54:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3664043 00:06:25.476 19:54:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3664043 ']' 00:06:25.476 19:54:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3664043 00:06:25.476 19:54:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:25.476 19:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:25.476 19:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3664043 00:06:25.476 19:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:25.476 19:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:25.476 19:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3664043' 00:06:25.476 killing process with pid 3664043 00:06:25.476 19:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3664043 00:06:25.476 19:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3664043 00:06:26.069 19:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3664059 00:06:26.069 19:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3664059 ']' 00:06:26.069 19:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3664059 00:06:26.069 19:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:26.069 19:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:26.069 19:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3664059 00:06:26.069 19:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:26.069 19:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:26.069 19:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3664059' 00:06:26.069 
killing process with pid 3664059 00:06:26.069 19:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3664059 00:06:26.069 19:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3664059 00:06:26.328 00:06:26.328 real 0m3.081s 00:06:26.328 user 0m3.186s 00:06:26.328 sys 0m1.167s 00:06:26.328 19:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:26.328 19:54:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.328 ************************************ 00:06:26.328 END TEST non_locking_app_on_locked_coremask 00:06:26.328 ************************************ 00:06:26.588 19:54:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:26.588 19:54:14 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:26.588 19:54:14 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:26.588 19:54:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.588 ************************************ 00:06:26.588 START TEST locking_app_on_unlocked_coremask 00:06:26.588 ************************************ 00:06:26.588 19:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:26.588 19:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3664613 00:06:26.588 19:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3664613 /var/tmp/spdk.sock 00:06:26.588 19:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:26.588 19:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3664613 ']' 00:06:26.588 19:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.588 19:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:26.588 19:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.588 19:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:26.588 19:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.588 [2024-07-13 19:54:14.065359] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:26.588 [2024-07-13 19:54:14.065420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3664613 ] 00:06:26.588 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.588 [2024-07-13 19:54:14.132426] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
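non_locking_app_on_locked_coremask, which finished just above, demonstrates the intended escape hatch: a second spdk_tgt can share an already-claimed core as long as it is started with --disable-cpumask-locks, in which case it prints the "CPU core locks deactivated" notice and never touches the lock files. Roughly, with the build path abbreviated:

  # first instance claims core 0 and holds the lock file
  ./build/bin/spdk_tgt -m 0x1 &
  # second instance shares core 0 but opts out of locking; it needs its own RPC socket
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &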
00:06:26.588 [2024-07-13 19:54:14.132452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.588 [2024-07-13 19:54:14.172154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.847 19:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:26.847 19:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:26.847 19:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3664659 00:06:26.847 19:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3664659 /var/tmp/spdk2.sock 00:06:26.847 19:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:26.847 19:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3664659 ']' 00:06:26.847 19:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.847 19:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:26.847 19:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.847 19:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:26.847 19:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.847 [2024-07-13 19:54:14.385324] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:26.847 [2024-07-13 19:54:14.385412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3664659 ] 00:06:26.847 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.847 [2024-07-13 19:54:14.480506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.106 [2024-07-13 19:54:14.560483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.674 19:54:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:27.674 19:54:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:27.674 19:54:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3664659 00:06:27.674 19:54:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.674 19:54:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3664659 00:06:29.052 lslocks: write error 00:06:29.052 19:54:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3664613 00:06:29.052 19:54:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3664613 ']' 00:06:29.052 19:54:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3664613 00:06:29.052 19:54:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:29.052 19:54:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:29.052 19:54:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3664613 00:06:29.052 19:54:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:29.052 19:54:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:29.052 19:54:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3664613' 00:06:29.052 killing process with pid 3664613 00:06:29.052 19:54:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3664613 00:06:29.052 19:54:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3664613 00:06:29.621 19:54:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3664659 00:06:29.621 19:54:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3664659 ']' 00:06:29.621 19:54:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3664659 00:06:29.621 19:54:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:29.621 19:54:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:29.621 19:54:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3664659 00:06:29.621 19:54:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
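The killprocess helper traced throughout these tests is deliberately defensive: before signalling, it checks that the pid still exists (kill -0) and that the command name is an SPDK reactor rather than, say, the sudo wrapper, then waits for the process to actually exit. A stripped-down reconstruction of that pattern, inferred from the trace above rather than copied from autotest_common.sh:

  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                  # is it still running?
    if [ "$(uname)" = Linux ]; then
      name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
    fi
    # (the real helper special-cases name = sudo; that branch is elided here)
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
  }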
00:06:29.621 19:54:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:29.621 19:54:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3664659' 00:06:29.621 killing process with pid 3664659 00:06:29.621 19:54:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3664659 00:06:29.621 19:54:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3664659 00:06:29.880 00:06:29.880 real 0m3.333s 00:06:29.880 user 0m3.438s 00:06:29.880 sys 0m1.277s 00:06:29.880 19:54:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.880 19:54:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.880 ************************************ 00:06:29.880 END TEST locking_app_on_unlocked_coremask 00:06:29.880 ************************************ 00:06:29.880 19:54:17 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:29.880 19:54:17 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:29.880 19:54:17 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.880 19:54:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.880 ************************************ 00:06:29.880 START TEST locking_app_on_locked_coremask 00:06:29.880 ************************************ 00:06:29.880 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:29.880 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3665229 00:06:29.880 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3665229 /var/tmp/spdk.sock 00:06:29.880 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.880 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3665229 ']' 00:06:29.880 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.880 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:29.880 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.880 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:29.880 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.880 [2024-07-13 19:54:17.478438] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
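locking_app_on_unlocked_coremask, which just wrapped up, is the mirror image of the previous case: the first target opts out with --disable-cpumask-locks, so a second target started normally on the same mask can still claim the per-core locks and both keep running on core 0. In outline, again with shortened paths:

  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &     # holds no lock files
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &      # claims core 0 normally
  lslocks -p "$!" | grep spdk_cpu_lock                      # only the second pid owns the lock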
00:06:29.880 [2024-07-13 19:54:17.478528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3665229 ] 00:06:29.880 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.140 [2024-07-13 19:54:17.546023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.140 [2024-07-13 19:54:17.585780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.140 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:30.140 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:30.140 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3665387 00:06:30.140 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3665387 /var/tmp/spdk2.sock 00:06:30.140 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:30.140 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:30.140 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3665387 /var/tmp/spdk2.sock 00:06:30.140 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:30.140 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.140 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:30.140 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.140 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3665387 /var/tmp/spdk2.sock 00:06:30.140 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3665387 ']' 00:06:30.140 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.140 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:30.140 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.140 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:30.140 19:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.140 [2024-07-13 19:54:17.796434] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:30.140 [2024-07-13 19:54:17.796525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3665387 ] 00:06:30.399 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.399 [2024-07-13 19:54:17.890578] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3665229 has claimed it. 00:06:30.399 [2024-07-13 19:54:17.890612] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:30.967 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3665387) - No such process 00:06:30.967 ERROR: process (pid: 3665387) is no longer running 00:06:30.967 19:54:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:30.967 19:54:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:30.967 19:54:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:30.967 19:54:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:30.967 19:54:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:30.967 19:54:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:30.967 19:54:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3665229 00:06:30.967 19:54:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3665229 00:06:30.967 19:54:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.227 lslocks: write error 00:06:31.227 19:54:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3665229 00:06:31.227 19:54:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3665229 ']' 00:06:31.227 19:54:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3665229 00:06:31.227 19:54:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:31.227 19:54:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:31.227 19:54:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3665229 00:06:31.227 19:54:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:31.227 19:54:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:31.227 19:54:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3665229' 00:06:31.227 killing process with pid 3665229 00:06:31.227 19:54:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3665229 00:06:31.227 19:54:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3665229 00:06:31.486 00:06:31.486 real 0m1.666s 00:06:31.486 user 0m1.746s 00:06:31.486 sys 0m0.596s 00:06:31.486 19:54:19 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.486 19:54:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.486 ************************************ 00:06:31.486 END TEST locking_app_on_locked_coremask 00:06:31.486 ************************************ 00:06:31.746 19:54:19 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:31.746 19:54:19 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:31.746 19:54:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.746 19:54:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.746 ************************************ 00:06:31.746 START TEST locking_overlapped_coremask 00:06:31.746 ************************************ 00:06:31.746 19:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:31.746 19:54:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:31.746 19:54:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3665617 00:06:31.746 19:54:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3665617 /var/tmp/spdk.sock 00:06:31.746 19:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3665617 ']' 00:06:31.746 19:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.746 19:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:31.746 19:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.746 19:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:31.746 19:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.746 [2024-07-13 19:54:19.201483] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
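locking_app_on_locked_coremask above is the failure path: with the first target already holding the core-0 lock, a second spdk_tgt started without --disable-cpumask-locks logs "Cannot create lock on core 0, probably process <pid> has claimed it", exits instead of starting, and waitforlisten then reports that the pid is gone. The shape of it:

  ./build/bin/spdk_tgt -m 0x1 &                        # claims core 0
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # refused: core 0 already claimed
  # the second invocation never listens on /var/tmp/spdk2.sock; the test treats that as the pass condition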
00:06:31.746 [2024-07-13 19:54:19.201535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3665617 ] 00:06:31.746 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.746 [2024-07-13 19:54:19.265719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.746 [2024-07-13 19:54:19.307476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.746 [2024-07-13 19:54:19.311457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.746 [2024-07-13 19:54:19.311460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.005 19:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:32.005 19:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:32.005 19:54:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3665736 00:06:32.005 19:54:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3665736 /var/tmp/spdk2.sock 00:06:32.005 19:54:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:32.005 19:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:32.005 19:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3665736 /var/tmp/spdk2.sock 00:06:32.005 19:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:32.005 19:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.005 19:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:32.005 19:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.005 19:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3665736 /var/tmp/spdk2.sock 00:06:32.005 19:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3665736 ']' 00:06:32.005 19:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.005 19:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:32.005 19:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.005 19:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:32.005 19:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.005 [2024-07-13 19:54:19.534674] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:32.005 [2024-07-13 19:54:19.534769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3665736 ] 00:06:32.005 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.005 [2024-07-13 19:54:19.630947] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3665617 has claimed it. 00:06:32.005 [2024-07-13 19:54:19.630982] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:32.573 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3665736) - No such process 00:06:32.573 ERROR: process (pid: 3665736) is no longer running 00:06:32.573 19:54:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:32.573 19:54:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:32.573 19:54:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:32.573 19:54:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:32.573 19:54:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:32.573 19:54:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:32.573 19:54:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:32.573 19:54:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:32.573 19:54:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:32.573 19:54:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:32.573 19:54:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3665617 00:06:32.573 19:54:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 3665617 ']' 00:06:32.573 19:54:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 3665617 00:06:32.573 19:54:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:32.573 19:54:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:32.573 19:54:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3665617 00:06:32.573 19:54:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:32.573 19:54:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:32.832 19:54:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3665617' 00:06:32.832 killing process with pid 3665617 00:06:32.832 19:54:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
3665617 00:06:32.832 19:54:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 3665617 00:06:33.092 00:06:33.092 real 0m1.339s 00:06:33.092 user 0m3.685s 00:06:33.092 sys 0m0.396s 00:06:33.092 19:54:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.092 19:54:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.092 ************************************ 00:06:33.092 END TEST locking_overlapped_coremask 00:06:33.092 ************************************ 00:06:33.092 19:54:20 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:33.092 19:54:20 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:33.092 19:54:20 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.092 19:54:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.092 ************************************ 00:06:33.092 START TEST locking_overlapped_coremask_via_rpc 00:06:33.092 ************************************ 00:06:33.092 19:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:33.092 19:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3665863 00:06:33.092 19:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3665863 /var/tmp/spdk.sock 00:06:33.092 19:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3665863 ']' 00:06:33.092 19:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.092 19:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:33.092 19:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.092 19:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:33.092 19:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.092 19:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:33.092 [2024-07-13 19:54:20.631718] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:33.092 [2024-07-13 19:54:20.631775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3665863 ] 00:06:33.092 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.092 [2024-07-13 19:54:20.698581] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
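The locking_overlapped_coremask run above generalizes the conflict to multi-core masks: the first target's -m 0x7 claims cores 0, 1 and 2 (lock files /var/tmp/spdk_cpu_lock_000 through _002), and a second target with -m 0x1c (cores 2, 3, 4) is refused because the masks overlap on core 2, leaving the first target's three lock files in place. Approximately:

  ./build/bin/spdk_tgt -m 0x7 &                         # cores 0-2 locked
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock   # cores 2-4: fails on core 2
  ls /var/tmp/spdk_cpu_lock_*                           # _000 _001 _002 still present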
00:06:33.092 [2024-07-13 19:54:20.698604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.092 [2024-07-13 19:54:20.739768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.092 [2024-07-13 19:54:20.739860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.092 [2024-07-13 19:54:20.739860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.351 19:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:33.351 19:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:33.351 19:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3666025 00:06:33.351 19:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3666025 /var/tmp/spdk2.sock 00:06:33.351 19:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:33.351 19:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3666025 ']' 00:06:33.351 19:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.351 19:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:33.351 19:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.351 19:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:33.351 19:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.351 [2024-07-13 19:54:20.944202] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:33.351 [2024-07-13 19:54:20.944296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3666025 ] 00:06:33.351 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.610 [2024-07-13 19:54:21.036148] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:33.610 [2024-07-13 19:54:21.036174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.610 [2024-07-13 19:54:21.119513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.610 [2024-07-13 19:54:21.119627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.610 [2024-07-13 19:54:21.119628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.175 [2024-07-13 19:54:21.803505] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3665863 has claimed it. 
00:06:34.175 request: 00:06:34.175 { 00:06:34.175 "method": "framework_enable_cpumask_locks", 00:06:34.175 "req_id": 1 00:06:34.175 } 00:06:34.175 Got JSON-RPC error response 00:06:34.175 response: 00:06:34.175 { 00:06:34.175 "code": -32603, 00:06:34.175 "message": "Failed to claim CPU core: 2" 00:06:34.175 } 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3665863 /var/tmp/spdk.sock 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3665863 ']' 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:34.175 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.433 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:34.433 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:34.433 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3666025 /var/tmp/spdk2.sock 00:06:34.433 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3666025 ']' 00:06:34.433 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.433 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:34.433 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
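locking_overlapped_coremask_via_rpc exercises the same overlap through the RPC path: both targets start with --disable-cpumask-locks, the first then claims its cores with framework_enable_cpumask_locks, and the same call against the second target's socket fails with JSON-RPC error -32603 ("Failed to claim CPU core: 2") while both targets stay up. Assuming the standard scripts/rpc.py client exposes the method, the sequence is roughly:

  ./build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
  ./build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  ./scripts/rpc.py framework_enable_cpumask_locks                          # first target claims cores 0-2
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: core 2 already claimed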
00:06:34.433 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:34.433 19:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.690 19:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:34.690 19:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:34.690 19:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:34.690 19:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:34.690 19:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:34.690 19:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:34.690 00:06:34.690 real 0m1.563s 00:06:34.690 user 0m0.668s 00:06:34.690 sys 0m0.187s 00:06:34.690 19:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.691 19:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.691 ************************************ 00:06:34.691 END TEST locking_overlapped_coremask_via_rpc 00:06:34.691 ************************************ 00:06:34.691 19:54:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:34.691 19:54:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3665863 ]] 00:06:34.691 19:54:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3665863 00:06:34.691 19:54:22 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3665863 ']' 00:06:34.691 19:54:22 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3665863 00:06:34.691 19:54:22 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:34.691 19:54:22 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:34.691 19:54:22 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3665863 00:06:34.691 19:54:22 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:34.691 19:54:22 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:34.691 19:54:22 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3665863' 00:06:34.691 killing process with pid 3665863 00:06:34.691 19:54:22 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3665863 00:06:34.691 19:54:22 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3665863 00:06:34.949 19:54:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3666025 ]] 00:06:34.949 19:54:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3666025 00:06:34.949 19:54:22 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3666025 ']' 00:06:34.949 19:54:22 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3666025 00:06:34.949 19:54:22 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:34.949 19:54:22 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:06:34.949 19:54:22 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3666025 00:06:35.209 19:54:22 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:35.209 19:54:22 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:35.209 19:54:22 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3666025' 00:06:35.209 killing process with pid 3666025 00:06:35.209 19:54:22 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3666025 00:06:35.209 19:54:22 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3666025 00:06:35.467 19:54:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:35.467 19:54:22 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:35.467 19:54:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3665863 ]] 00:06:35.467 19:54:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3665863 00:06:35.467 19:54:22 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3665863 ']' 00:06:35.467 19:54:22 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3665863 00:06:35.467 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3665863) - No such process 00:06:35.467 19:54:22 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3665863 is not found' 00:06:35.467 Process with pid 3665863 is not found 00:06:35.467 19:54:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3666025 ]] 00:06:35.467 19:54:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3666025 00:06:35.467 19:54:22 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3666025 ']' 00:06:35.467 19:54:22 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3666025 00:06:35.467 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3666025) - No such process 00:06:35.467 19:54:22 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3666025 is not found' 00:06:35.467 Process with pid 3666025 is not found 00:06:35.467 19:54:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:35.467 00:06:35.467 real 0m14.808s 00:06:35.467 user 0m24.284s 00:06:35.467 sys 0m5.826s 00:06:35.468 19:54:22 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:35.468 19:54:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.468 ************************************ 00:06:35.468 END TEST cpu_locks 00:06:35.468 ************************************ 00:06:35.468 00:06:35.468 real 0m38.996s 00:06:35.468 user 1m11.886s 00:06:35.468 sys 0m9.923s 00:06:35.468 19:54:22 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:35.468 19:54:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.468 ************************************ 00:06:35.468 END TEST event 00:06:35.468 ************************************ 00:06:35.468 19:54:23 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:35.468 19:54:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:35.468 19:54:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:35.468 19:54:23 -- common/autotest_common.sh@10 -- # set +x 00:06:35.468 ************************************ 00:06:35.468 START TEST thread 00:06:35.468 ************************************ 00:06:35.468 19:54:23 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:35.468 * Looking for test storage... 00:06:35.727 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:06:35.727 19:54:23 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:35.727 19:54:23 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:35.727 19:54:23 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:35.727 19:54:23 thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.727 ************************************ 00:06:35.727 START TEST thread_poller_perf 00:06:35.727 ************************************ 00:06:35.727 19:54:23 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:35.727 [2024-07-13 19:54:23.187123] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:35.727 [2024-07-13 19:54:23.187248] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3666417 ] 00:06:35.727 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.727 [2024-07-13 19:54:23.258449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.727 [2024-07-13 19:54:23.297164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.727 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:37.103 ====================================== 00:06:37.103 busy:2503698480 (cyc) 00:06:37.103 total_run_count: 858000 00:06:37.103 tsc_hz: 2500000000 (cyc) 00:06:37.103 ====================================== 00:06:37.103 poller_cost: 2918 (cyc), 1167 (nsec) 00:06:37.103 00:06:37.103 real 0m1.186s 00:06:37.103 user 0m1.082s 00:06:37.103 sys 0m0.100s 00:06:37.103 19:54:24 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.103 19:54:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:37.103 ************************************ 00:06:37.103 END TEST thread_poller_perf 00:06:37.103 ************************************ 00:06:37.103 19:54:24 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:37.103 19:54:24 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:37.103 19:54:24 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.103 19:54:24 thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.103 ************************************ 00:06:37.103 START TEST thread_poller_perf 00:06:37.103 ************************************ 00:06:37.103 19:54:24 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:37.103 [2024-07-13 19:54:24.446339] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:37.103 [2024-07-13 19:54:24.446417] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3666700 ] 00:06:37.103 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.103 [2024-07-13 19:54:24.514739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.103 [2024-07-13 19:54:24.551935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.103 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:38.040 ====================================== 00:06:38.040 busy:2501466502 (cyc) 00:06:38.040 total_run_count: 14292000 00:06:38.040 tsc_hz: 2500000000 (cyc) 00:06:38.040 ====================================== 00:06:38.040 poller_cost: 175 (cyc), 70 (nsec) 00:06:38.040 00:06:38.040 real 0m1.175s 00:06:38.040 user 0m1.083s 00:06:38.040 sys 0m0.088s 00:06:38.040 19:54:25 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.040 19:54:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:38.040 ************************************ 00:06:38.040 END TEST thread_poller_perf 00:06:38.040 ************************************ 00:06:38.040 19:54:25 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:06:38.040 19:54:25 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:38.040 19:54:25 thread -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:38.040 19:54:25 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.040 19:54:25 thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.040 ************************************ 00:06:38.040 START TEST thread_spdk_lock 00:06:38.040 ************************************ 00:06:38.040 19:54:25 thread.thread_spdk_lock -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:38.040 [2024-07-13 19:54:25.694466] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:38.040 [2024-07-13 19:54:25.694584] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3666980 ] 00:06:38.299 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.299 [2024-07-13 19:54:25.765314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:38.299 [2024-07-13 19:54:25.803711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.299 [2024-07-13 19:54:25.803714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.868 [2024-07-13 19:54:26.299364] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 961:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:38.868 [2024-07-13 19:54:26.299398] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:38.868 [2024-07-13 19:54:26.299409] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x13107c0 00:06:38.868 [2024-07-13 19:54:26.300287] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:38.868 [2024-07-13 19:54:26.300390] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1022:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:38.868 [2024-07-13 19:54:26.300408] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:38.868 Starting test contend 00:06:38.868 Worker Delay Wait us Hold us Total us 00:06:38.868 0 3 176980 187392 364373 00:06:38.868 1 5 94656 288695 383352 00:06:38.868 PASS test contend 00:06:38.868 Starting test hold_by_poller 00:06:38.868 PASS test hold_by_poller 00:06:38.868 Starting test hold_by_message 00:06:38.868 PASS test hold_by_message 00:06:38.868 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:06:38.868 100014 assertions passed 00:06:38.868 0 assertions failed 00:06:38.868 00:06:38.868 real 0m0.674s 00:06:38.868 user 0m1.065s 00:06:38.868 sys 0m0.102s 00:06:38.868 19:54:26 thread.thread_spdk_lock -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.868 19:54:26 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:06:38.868 ************************************ 00:06:38.868 END TEST thread_spdk_lock 00:06:38.868 ************************************ 00:06:38.868 00:06:38.868 real 0m3.341s 00:06:38.868 user 0m3.320s 00:06:38.868 sys 0m0.526s 00:06:38.868 19:54:26 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.868 19:54:26 thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.868 ************************************ 00:06:38.868 END TEST thread 00:06:38.868 ************************************ 00:06:38.868 19:54:26 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:06:38.868 19:54:26 -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:38.868 19:54:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.868 19:54:26 -- common/autotest_common.sh@10 -- # set +x 00:06:38.868 ************************************ 00:06:38.868 START TEST accel 00:06:38.868 ************************************ 00:06:38.868 19:54:26 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:06:39.127 * Looking for test storage... 00:06:39.127 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:06:39.127 19:54:26 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:39.127 19:54:26 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:39.127 19:54:26 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:39.127 19:54:26 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3667056 00:06:39.127 19:54:26 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:39.127 19:54:26 accel -- accel/accel.sh@63 -- # waitforlisten 3667056 00:06:39.127 19:54:26 accel -- common/autotest_common.sh@827 -- # '[' -z 3667056 ']' 00:06:39.127 19:54:26 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.127 19:54:26 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:39.127 19:54:26 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:39.127 19:54:26 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.127 19:54:26 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.127 19:54:26 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.127 19:54:26 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:39.127 19:54:26 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.127 19:54:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.127 19:54:26 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.127 19:54:26 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.127 19:54:26 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:39.127 19:54:26 accel -- accel/accel.sh@41 -- # jq -r . 00:06:39.127 [2024-07-13 19:54:26.597667] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:39.127 [2024-07-13 19:54:26.597744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3667056 ] 00:06:39.127 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.127 [2024-07-13 19:54:26.665523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.127 [2024-07-13 19:54:26.706502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.386 19:54:26 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:39.386 19:54:26 accel -- common/autotest_common.sh@860 -- # return 0 00:06:39.386 19:54:26 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:39.386 19:54:26 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:39.386 19:54:26 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:39.386 19:54:26 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:39.386 19:54:26 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:39.386 19:54:26 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:39.386 19:54:26 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.386 19:54:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.386 19:54:26 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:39.386 19:54:26 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.386 19:54:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.386 19:54:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.386 19:54:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.386 19:54:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.386 19:54:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.387 19:54:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.387 19:54:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.387 19:54:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.387 19:54:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.387 19:54:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.387 19:54:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.387 19:54:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.387 19:54:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.387 19:54:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.387 19:54:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.387 
19:54:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.387 19:54:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.387 19:54:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.387 19:54:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.387 19:54:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.387 19:54:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.387 19:54:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.387 19:54:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.387 19:54:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.387 19:54:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.387 19:54:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.387 19:54:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.387 19:54:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.387 19:54:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.387 19:54:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.387 19:54:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.387 19:54:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.387 19:54:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.387 19:54:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.387 19:54:26 accel -- accel/accel.sh@75 -- # killprocess 3667056 00:06:39.387 19:54:26 accel -- common/autotest_common.sh@946 -- # '[' -z 3667056 ']' 00:06:39.387 19:54:26 accel -- common/autotest_common.sh@950 -- # kill -0 3667056 00:06:39.387 19:54:26 accel -- common/autotest_common.sh@951 -- # uname 00:06:39.387 19:54:26 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:39.387 19:54:26 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3667056 00:06:39.387 19:54:26 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:39.387 19:54:26 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:39.387 19:54:26 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3667056' 00:06:39.387 killing process with pid 3667056 00:06:39.387 19:54:26 accel -- common/autotest_common.sh@965 -- # kill 3667056 00:06:39.387 19:54:26 accel -- common/autotest_common.sh@970 -- # 
wait 3667056 00:06:39.646 19:54:27 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:39.646 19:54:27 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:39.646 19:54:27 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:39.646 19:54:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.646 19:54:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.646 19:54:27 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:39.646 19:54:27 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:39.646 19:54:27 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:39.646 19:54:27 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.646 19:54:27 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.646 19:54:27 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.646 19:54:27 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.646 19:54:27 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.646 19:54:27 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:39.646 19:54:27 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:06:39.905 19:54:27 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.905 19:54:27 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:39.905 19:54:27 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:39.905 19:54:27 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:39.906 19:54:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.906 19:54:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.906 ************************************ 00:06:39.906 START TEST accel_missing_filename 00:06:39.906 ************************************ 00:06:39.906 19:54:27 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:39.906 19:54:27 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:39.906 19:54:27 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:39.906 19:54:27 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:39.906 19:54:27 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.906 19:54:27 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:39.906 19:54:27 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.906 19:54:27 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:39.906 19:54:27 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:39.906 19:54:27 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:39.906 19:54:27 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.906 19:54:27 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.906 19:54:27 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.906 19:54:27 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.906 
19:54:27 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.906 19:54:27 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:39.906 19:54:27 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:39.906 [2024-07-13 19:54:27.411106] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:39.906 [2024-07-13 19:54:27.411190] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3667350 ] 00:06:39.906 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.906 [2024-07-13 19:54:27.482859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.906 [2024-07-13 19:54:27.522685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.906 [2024-07-13 19:54:27.562372] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:40.165 [2024-07-13 19:54:27.622113] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:40.165 A filename is required. 00:06:40.165 19:54:27 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:40.165 19:54:27 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:40.165 19:54:27 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:40.165 19:54:27 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:40.165 19:54:27 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:40.165 19:54:27 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:40.165 00:06:40.165 real 0m0.292s 00:06:40.165 user 0m0.193s 00:06:40.165 sys 0m0.142s 00:06:40.165 19:54:27 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:40.165 19:54:27 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:40.165 ************************************ 00:06:40.165 END TEST accel_missing_filename 00:06:40.165 ************************************ 00:06:40.165 19:54:27 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:40.165 19:54:27 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:40.165 19:54:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.165 19:54:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.165 ************************************ 00:06:40.165 START TEST accel_compress_verify 00:06:40.165 ************************************ 00:06:40.165 19:54:27 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:40.165 19:54:27 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:40.165 19:54:27 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:40.165 19:54:27 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:40.165 19:54:27 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.165 19:54:27 accel.accel_compress_verify -- 
common/autotest_common.sh@640 -- # type -t accel_perf 00:06:40.165 19:54:27 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.165 19:54:27 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:40.165 19:54:27 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:40.165 19:54:27 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:40.165 19:54:27 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.165 19:54:27 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.165 19:54:27 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.165 19:54:27 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.165 19:54:27 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.165 19:54:27 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:40.165 19:54:27 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:40.165 [2024-07-13 19:54:27.773341] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:40.165 [2024-07-13 19:54:27.773420] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3667373 ] 00:06:40.165 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.424 [2024-07-13 19:54:27.844071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.424 [2024-07-13 19:54:27.881496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.424 [2024-07-13 19:54:27.920940] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:40.424 [2024-07-13 19:54:27.980530] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:40.424 00:06:40.424 Compression does not support the verify option, aborting. 
00:06:40.424 19:54:28 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:40.424 19:54:28 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:40.424 19:54:28 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:40.424 19:54:28 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:40.424 19:54:28 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:40.424 19:54:28 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:40.424 00:06:40.424 real 0m0.289s 00:06:40.424 user 0m0.201s 00:06:40.424 sys 0m0.128s 00:06:40.424 19:54:28 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:40.424 19:54:28 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:40.424 ************************************ 00:06:40.424 END TEST accel_compress_verify 00:06:40.424 ************************************ 00:06:40.424 19:54:28 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:40.424 19:54:28 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:40.424 19:54:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.424 19:54:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.684 ************************************ 00:06:40.684 START TEST accel_wrong_workload 00:06:40.684 ************************************ 00:06:40.684 19:54:28 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:40.684 19:54:28 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:40.684 19:54:28 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:40.684 19:54:28 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:40.684 19:54:28 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.684 19:54:28 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:40.684 19:54:28 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.684 19:54:28 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:40.684 19:54:28 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:40.684 19:54:28 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:40.684 19:54:28 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.684 19:54:28 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.684 19:54:28 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.684 19:54:28 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.684 19:54:28 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.684 19:54:28 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:40.684 19:54:28 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
00:06:40.684 Unsupported workload type: foobar 00:06:40.684 [2024-07-13 19:54:28.138535] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:40.684 accel_perf options: 00:06:40.684 [-h help message] 00:06:40.684 [-q queue depth per core] 00:06:40.684 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:40.684 [-T number of threads per core 00:06:40.684 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:40.684 [-t time in seconds] 00:06:40.684 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:40.684 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:40.684 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:40.684 [-l for compress/decompress workloads, name of uncompressed input file 00:06:40.684 [-S for crc32c workload, use this seed value (default 0) 00:06:40.684 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:40.684 [-f for fill workload, use this BYTE value (default 255) 00:06:40.684 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:40.684 [-y verify result if this switch is on] 00:06:40.684 [-a tasks to allocate per core (default: same value as -q)] 00:06:40.684 Can be used to spread operations across a wider range of memory. 00:06:40.684 19:54:28 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:40.684 19:54:28 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:40.684 19:54:28 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:40.684 19:54:28 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:40.684 00:06:40.684 real 0m0.029s 00:06:40.684 user 0m0.014s 00:06:40.684 sys 0m0.015s 00:06:40.684 19:54:28 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:40.684 19:54:28 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:40.684 ************************************ 00:06:40.684 END TEST accel_wrong_workload 00:06:40.684 ************************************ 00:06:40.684 Error: writing output failed: Broken pipe 00:06:40.684 19:54:28 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:40.684 19:54:28 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:40.684 19:54:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.684 19:54:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.684 ************************************ 00:06:40.684 START TEST accel_negative_buffers 00:06:40.684 ************************************ 00:06:40.684 19:54:28 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:40.684 19:54:28 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:40.684 19:54:28 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:40.684 19:54:28 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:40.684 19:54:28 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.684 19:54:28 accel.accel_negative_buffers -- 
common/autotest_common.sh@640 -- # type -t accel_perf 00:06:40.684 19:54:28 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.684 19:54:28 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:40.685 19:54:28 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:40.685 19:54:28 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:40.685 19:54:28 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.685 19:54:28 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.685 19:54:28 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.685 19:54:28 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.685 19:54:28 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.685 19:54:28 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:40.685 19:54:28 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:40.685 -x option must be non-negative. 00:06:40.685 [2024-07-13 19:54:28.245080] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:40.685 accel_perf options: 00:06:40.685 [-h help message] 00:06:40.685 [-q queue depth per core] 00:06:40.685 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:40.685 [-T number of threads per core 00:06:40.685 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:40.685 [-t time in seconds] 00:06:40.685 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:40.685 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:40.685 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:40.685 [-l for compress/decompress workloads, name of uncompressed input file 00:06:40.685 [-S for crc32c workload, use this seed value (default 0) 00:06:40.685 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:40.685 [-f for fill workload, use this BYTE value (default 255) 00:06:40.685 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:40.685 [-y verify result if this switch is on] 00:06:40.685 [-a tasks to allocate per core (default: same value as -q)] 00:06:40.685 Can be used to spread operations across a wider range of memory. 
00:06:40.685 19:54:28 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:40.685 19:54:28 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:40.685 19:54:28 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:40.685 19:54:28 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:40.685 00:06:40.685 real 0m0.028s 00:06:40.685 user 0m0.009s 00:06:40.685 sys 0m0.019s 00:06:40.685 19:54:28 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:40.685 19:54:28 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:40.685 ************************************ 00:06:40.685 END TEST accel_negative_buffers 00:06:40.685 ************************************ 00:06:40.685 Error: writing output failed: Broken pipe 00:06:40.685 19:54:28 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:40.685 19:54:28 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:40.685 19:54:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.685 19:54:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.685 ************************************ 00:06:40.685 START TEST accel_crc32c 00:06:40.685 ************************************ 00:06:40.685 19:54:28 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:40.685 19:54:28 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:40.685 19:54:28 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:40.685 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.685 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.685 19:54:28 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:40.685 19:54:28 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:40.685 19:54:28 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:40.685 19:54:28 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.685 19:54:28 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.685 19:54:28 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.685 19:54:28 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.685 19:54:28 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.685 19:54:28 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:40.685 19:54:28 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:40.685 [2024-07-13 19:54:28.342324] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:40.685 [2024-07-13 19:54:28.342416] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3667529 ] 00:06:40.945 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.945 [2024-07-13 19:54:28.411712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.945 [2024-07-13 19:54:28.449379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:40.945 19:54:28 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.945 19:54:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.327 19:54:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.327 19:54:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.327 19:54:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.327 19:54:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.327 19:54:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.327 19:54:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.327 19:54:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.327 19:54:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.327 19:54:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.327 19:54:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.327 19:54:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.327 19:54:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.327 19:54:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.327 19:54:29 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:06:42.327 19:54:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.327 19:54:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.327 19:54:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.327 19:54:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.327 19:54:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.327 19:54:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.327 19:54:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:42.327 19:54:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:42.327 19:54:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:42.327 19:54:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:42.327 19:54:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.327 19:54:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:42.327 19:54:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.327 00:06:42.327 real 0m1.293s 00:06:42.327 user 0m1.176s 00:06:42.327 sys 0m0.131s 00:06:42.327 19:54:29 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.327 19:54:29 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:42.327 ************************************ 00:06:42.327 END TEST accel_crc32c 00:06:42.327 ************************************ 00:06:42.327 19:54:29 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:42.327 19:54:29 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:42.327 19:54:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.327 19:54:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.327 ************************************ 00:06:42.327 START TEST accel_crc32c_C2 00:06:42.327 ************************************ 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:42.327 [2024-07-13 19:54:29.693029] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:42.327 [2024-07-13 19:54:29.693102] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3667730 ] 00:06:42.327 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.327 [2024-07-13 19:54:29.760600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.327 [2024-07-13 19:54:29.798051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var 
val 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.327 19:54:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.310 19:54:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:43.310 19:54:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.310 19:54:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.310 19:54:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.310 19:54:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:43.310 19:54:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.310 19:54:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.310 19:54:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.310 19:54:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:43.310 19:54:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.310 19:54:30 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:43.310 19:54:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.310 19:54:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:43.310 19:54:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.310 19:54:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.310 19:54:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.310 19:54:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:43.310 19:54:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.311 19:54:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.311 19:54:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.311 19:54:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:43.311 19:54:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.311 19:54:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.311 19:54:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.311 19:54:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.311 19:54:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:43.311 19:54:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.311 00:06:43.311 real 0m1.290s 00:06:43.311 user 0m1.176s 00:06:43.311 sys 0m0.130s 00:06:43.311 19:54:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.311 19:54:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:43.311 ************************************ 00:06:43.311 END TEST accel_crc32c_C2 00:06:43.311 ************************************ 00:06:43.570 19:54:30 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:43.570 19:54:30 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:43.570 19:54:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.570 19:54:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.570 ************************************ 00:06:43.570 START TEST accel_copy 00:06:43.570 ************************************ 00:06:43.570 19:54:31 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:43.570 19:54:31 accel.accel_copy -- 
accel/accel.sh@41 -- # jq -r . 00:06:43.570 [2024-07-13 19:54:31.060369] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:43.570 [2024-07-13 19:54:31.060518] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3668009 ] 00:06:43.570 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.570 [2024-07-13 19:54:31.128747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.570 [2024-07-13 19:54:31.166188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.570 19:54:31 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.570 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.571 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.571 19:54:31 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:43.571 19:54:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.571 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.571 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.571 19:54:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.571 19:54:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.571 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.571 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.571 19:54:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.571 19:54:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.571 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.571 19:54:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:44.949 19:54:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.949 00:06:44.949 real 0m1.293s 00:06:44.949 user 0m1.176s 00:06:44.949 sys 0m0.130s 00:06:44.949 19:54:32 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.949 19:54:32 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:44.949 ************************************ 00:06:44.949 END TEST accel_copy 00:06:44.949 ************************************ 00:06:44.949 19:54:32 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.949 19:54:32 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:44.949 19:54:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.949 19:54:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.949 ************************************ 00:06:44.949 START TEST accel_fill 00:06:44.949 ************************************ 00:06:44.949 19:54:32 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.949 19:54:32 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:44.949 19:54:32 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:44.949 19:54:32 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.949 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.949 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.949 19:54:32 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.949 19:54:32 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:44.949 19:54:32 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.949 19:54:32 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.949 19:54:32 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.949 19:54:32 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.949 19:54:32 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.949 19:54:32 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:44.950 [2024-07-13 19:54:32.408497] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
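(Reproduction note: every test block in this stretch drives the same accel_perf example binary, and only the -w workload plus the extra arguments shown on the run_test line change. A minimal manual rerun of the fill case started above, assuming the SPDK tree has been built with its example apps, might look like the following; the flag values are copied verbatim from the accel_fill command line recorded a few records above, and dropping the -c /dev/fd/62 JSON config that the harness pipes in is an assumption, which should leave the default software implementation in use, matching the accel_module=software checks each block records.

  # hand-rerun of the fill workload with the arguments the harness used
  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
)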
00:06:44.950 [2024-07-13 19:54:32.408550] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3668300 ] 00:06:44.950 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.950 [2024-07-13 19:54:32.467141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.950 [2024-07-13 19:54:32.504600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.950 19:54:32 
accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.950 19:54:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var 
val 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:46.328 19:54:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.328 00:06:46.328 real 0m1.273s 00:06:46.328 user 0m1.165s 00:06:46.328 sys 0m0.122s 00:06:46.328 19:54:33 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.328 19:54:33 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:46.328 ************************************ 00:06:46.328 END TEST accel_fill 00:06:46.328 ************************************ 00:06:46.328 19:54:33 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:46.328 19:54:33 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:46.328 19:54:33 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.328 19:54:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.328 ************************************ 00:06:46.328 START TEST accel_copy_crc32c 00:06:46.328 ************************************ 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:46.328 [2024-07-13 19:54:33.746534] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:46.328 [2024-07-13 19:54:33.746578] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3668580 ] 00:06:46.328 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.328 [2024-07-13 19:54:33.810168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.328 [2024-07-13 19:54:33.847305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.328 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.329 19:54:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.706 19:54:35 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.706 00:06:47.706 real 0m1.278s 00:06:47.706 user 0m1.173s 00:06:47.706 sys 0m0.119s 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:47.706 19:54:35 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:47.706 ************************************ 00:06:47.706 END TEST accel_copy_crc32c 00:06:47.706 ************************************ 00:06:47.706 19:54:35 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:47.706 19:54:35 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:47.706 19:54:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.706 19:54:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.706 ************************************ 00:06:47.706 START TEST accel_copy_crc32c_C2 00:06:47.706 ************************************ 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 
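(The _C2 variants interleaved here differ from their base tests only by the extra -C 2 argument, which run_test hands to accel_test and which ends up verbatim on the accel_perf command line recorded just above. Under the same assumptions as the fill sketch earlier, example apps built and no JSON config piped in, the chained copy_crc32c case could be rerun by hand as:

  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2
)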
00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:47.706 [2024-07-13 19:54:35.097060] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:47.706 [2024-07-13 19:54:35.097100] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3668861 ] 00:06:47.706 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.706 [2024-07-13 19:54:35.158808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.706 [2024-07-13 19:54:35.195957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.706 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.707 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:47.707 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.707 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.707 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.707 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:47.707 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.707 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.707 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.707 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.707 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.707 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.707 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.707 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:47.707 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.707 19:54:35 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:47.707 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.707 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.707 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.707 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.707 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.707 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.707 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.707 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.707 19:54:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.085 00:06:49.085 real 0m1.275s 00:06:49.085 user 0m1.170s 00:06:49.085 sys 0m0.122s 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:49.085 19:54:36 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:49.085 ************************************ 00:06:49.085 END TEST accel_copy_crc32c_C2 00:06:49.085 
************************************ 00:06:49.086 19:54:36 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:49.086 19:54:36 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:49.086 19:54:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:49.086 19:54:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.086 ************************************ 00:06:49.086 START TEST accel_dualcast 00:06:49.086 ************************************ 00:06:49.086 19:54:36 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:49.086 [2024-07-13 19:54:36.446806] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:49.086 [2024-07-13 19:54:36.446857] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3669146 ] 00:06:49.086 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.086 [2024-07-13 19:54:36.511329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.086 [2024-07-13 19:54:36.548707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.086 
19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:49.086 19:54:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:50.465 19:54:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:50.465 19:54:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:50.465 19:54:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:50.465 19:54:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:50.465 19:54:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:50.465 19:54:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:50.465 19:54:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:50.465 19:54:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:50.465 19:54:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:50.465 19:54:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:50.465 19:54:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:50.465 19:54:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:50.465 19:54:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:50.465 19:54:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:50.465 19:54:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:50.465 19:54:37 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:06:50.465 19:54:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:50.465 19:54:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:50.465 19:54:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:50.465 19:54:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:50.465 19:54:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:50.465 19:54:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:50.465 19:54:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:50.465 19:54:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:50.465 19:54:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.465 19:54:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:50.465 19:54:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.465 00:06:50.465 real 0m1.280s 00:06:50.465 user 0m1.174s 00:06:50.465 sys 0m0.120s 00:06:50.465 19:54:37 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.465 19:54:37 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:50.465 ************************************ 00:06:50.465 END TEST accel_dualcast 00:06:50.465 ************************************ 00:06:50.465 19:54:37 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:50.465 19:54:37 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:50.465 19:54:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.465 19:54:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.466 ************************************ 00:06:50.466 START TEST accel_compare 00:06:50.466 ************************************ 00:06:50.466 19:54:37 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:50.466 [2024-07-13 19:54:37.787050] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:50.466 [2024-07-13 19:54:37.787104] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3669422 ] 00:06:50.466 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.466 [2024-07-13 19:54:37.845788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.466 [2024-07-13 19:54:37.883008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.466 19:54:37 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.466 19:54:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.405 19:54:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:51.405 19:54:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:51.405 19:54:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:51.405 19:54:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.405 19:54:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:51.405 19:54:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:51.405 19:54:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:51.405 19:54:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.405 19:54:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:51.405 19:54:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:51.405 19:54:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:51.405 19:54:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.405 19:54:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:51.405 19:54:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:51.405 19:54:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:51.405 19:54:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.405 19:54:39 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:06:51.405 19:54:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:51.405 19:54:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:51.405 19:54:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.405 19:54:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:51.405 19:54:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:51.405 19:54:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:51.405 19:54:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.405 19:54:39 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.405 19:54:39 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:51.405 19:54:39 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.405 00:06:51.405 real 0m1.274s 00:06:51.405 user 0m1.170s 00:06:51.405 sys 0m0.117s 00:06:51.405 19:54:39 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.405 19:54:39 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:51.405 ************************************ 00:06:51.405 END TEST accel_compare 00:06:51.405 ************************************ 00:06:51.665 19:54:39 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:51.665 19:54:39 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:51.665 19:54:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.665 19:54:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.665 ************************************ 00:06:51.665 START TEST accel_xor 00:06:51.665 ************************************ 00:06:51.665 19:54:39 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:51.665 [2024-07-13 19:54:39.130661] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:51.665 [2024-07-13 19:54:39.130714] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3669601 ] 00:06:51.665 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.665 [2024-07-13 19:54:39.194825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.665 [2024-07-13 19:54:39.232821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.665 19:54:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.045 
19:54:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.045 00:06:53.045 real 0m1.281s 00:06:53.045 user 0m1.176s 00:06:53.045 sys 0m0.120s 00:06:53.045 19:54:40 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.045 19:54:40 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:53.045 ************************************ 00:06:53.045 END TEST accel_xor 00:06:53.045 ************************************ 00:06:53.045 19:54:40 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:53.045 19:54:40 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:53.045 19:54:40 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.045 19:54:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.045 ************************************ 00:06:53.045 START TEST accel_xor 00:06:53.045 ************************************ 00:06:53.045 19:54:40 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:53.045 [2024-07-13 19:54:40.502266] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:53.045 [2024-07-13 19:54:40.502352] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3669822 ] 00:06:53.045 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.045 [2024-07-13 19:54:40.571770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.045 [2024-07-13 19:54:40.610153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:53.045 19:54:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.046 19:54:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.423 19:54:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.423 19:54:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.423 19:54:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.423 19:54:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.423 19:54:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.423 19:54:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.423 19:54:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.423 19:54:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.423 19:54:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.423 19:54:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.423 19:54:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.423 19:54:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.423 19:54:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.423 19:54:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.423 19:54:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.423 19:54:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.423 19:54:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.423 
19:54:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.423 19:54:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.423 19:54:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.423 19:54:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.423 19:54:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.423 19:54:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.423 19:54:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.423 19:54:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.423 19:54:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:54.423 19:54:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.423 00:06:54.423 real 0m1.296s 00:06:54.423 user 0m1.178s 00:06:54.423 sys 0m0.132s 00:06:54.423 19:54:41 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.423 19:54:41 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:54.423 ************************************ 00:06:54.423 END TEST accel_xor 00:06:54.423 ************************************ 00:06:54.423 19:54:41 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:54.423 19:54:41 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:54.423 19:54:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:54.423 19:54:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.423 ************************************ 00:06:54.423 START TEST accel_dif_verify 00:06:54.423 ************************************ 00:06:54.423 19:54:41 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:06:54.423 19:54:41 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:54.423 19:54:41 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:54.424 19:54:41 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:54.424 19:54:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.424 19:54:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.424 19:54:41 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:54.424 19:54:41 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:54.424 19:54:41 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.424 19:54:41 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.424 19:54:41 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.424 19:54:41 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.424 19:54:41 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.424 19:54:41 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:54.424 19:54:41 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:54.424 [2024-07-13 19:54:41.859618] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:54.424 [2024-07-13 19:54:41.859672] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3670038 ] 00:06:54.424 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.424 [2024-07-13 19:54:41.924375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.424 [2024-07-13 19:54:41.961885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.424 
19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.424 19:54:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:55.797 19:54:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:55.797 
19:54:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:55.797 19:54:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:55.797 19:54:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:55.797 19:54:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:55.797 19:54:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:55.797 19:54:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:55.797 19:54:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:55.797 19:54:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:55.797 19:54:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:55.797 19:54:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:55.797 19:54:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:55.797 19:54:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:55.797 19:54:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:55.797 19:54:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:55.797 19:54:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:55.797 19:54:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:55.797 19:54:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:55.797 19:54:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:55.797 19:54:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:55.797 19:54:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:55.797 19:54:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:55.797 19:54:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:55.797 19:54:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:55.797 19:54:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.797 19:54:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:55.797 19:54:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.797 00:06:55.797 real 0m1.280s 00:06:55.797 user 0m1.172s 00:06:55.797 sys 0m0.125s 00:06:55.797 19:54:43 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.797 19:54:43 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:55.797 ************************************ 00:06:55.797 END TEST accel_dif_verify 00:06:55.797 ************************************ 00:06:55.797 19:54:43 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:55.797 19:54:43 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:55.797 19:54:43 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.797 19:54:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.797 ************************************ 00:06:55.797 START TEST accel_dif_generate 00:06:55.797 ************************************ 00:06:55.797 19:54:43 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:06:55.797 19:54:43 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:55.797 19:54:43 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:55.797 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.797 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.797 
19:54:43 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:55.797 19:54:43 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:55.797 19:54:43 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:55.797 19:54:43 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.797 19:54:43 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.797 19:54:43 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.797 19:54:43 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.797 19:54:43 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.797 19:54:43 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:55.797 19:54:43 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:55.797 [2024-07-13 19:54:43.229678] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:55.797 [2024-07-13 19:54:43.229761] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3670313 ] 00:06:55.797 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.797 [2024-07-13 19:54:43.301023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.797 [2024-07-13 19:54:43.341294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.797 19:54:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.798 19:54:43 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:55.798 19:54:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:57.176 19:54:44 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.176 00:06:57.176 real 0m1.300s 00:06:57.176 user 0m1.184s 00:06:57.176 sys 
0m0.132s 00:06:57.176 19:54:44 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.176 19:54:44 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:57.176 ************************************ 00:06:57.176 END TEST accel_dif_generate 00:06:57.176 ************************************ 00:06:57.176 19:54:44 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:57.176 19:54:44 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:57.176 19:54:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.176 19:54:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.176 ************************************ 00:06:57.176 START TEST accel_dif_generate_copy 00:06:57.176 ************************************ 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:57.176 [2024-07-13 19:54:44.600193] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
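Each accel test case in this section launches the accel_perf example binary through a thin shell wrapper; the "-c /dev/fd/62" argument visible in the traced command above indicates that the JSON accel configuration is handed to accel_perf over a process substitution rather than a file on disk. The sketch below shows one plausible shape of that wrapper, reusing the build_accel_config and accel_json_cfg names that appear in the trace; the function bodies are assumptions for illustration, not the verbatim accel.sh source.

  # Sketch only: one way the "-c /dev/fd/62" argument in the traced command can arise.
  build_accel_config() {
    # Assumed body: emit whatever JSON fragments were collected in accel_json_cfg
    # (the array initialised at accel/accel.sh@31 in the trace). It stays empty in
    # these runs, so the software implementation ends up handling the operations,
    # as the later "[[ -n software ]]" checks confirm.
    printf '%s' "${accel_json_cfg[*]}"
  }

  accel_perf() {
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf \
      -c <(build_accel_config) "$@"    # <(...) expands to a /dev/fd/NN path
  }

  # The dif_generate_copy case traced in this section is then simply:
  accel_perf -t 1 -w dif_generate_copy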
00:06:57.176 [2024-07-13 19:54:44.600273] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3670601 ] 00:06:57.176 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.176 [2024-07-13 19:54:44.670468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.176 [2024-07-13 19:54:44.707169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.176 19:54:44 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.176 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:57.177 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.177 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.177 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.177 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.177 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.177 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.177 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.177 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.177 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.177 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.177 19:54:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.554 00:06:58.554 real 0m1.294s 00:06:58.554 user 0m1.169s 00:06:58.554 sys 0m0.140s 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.554 19:54:45 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:58.554 ************************************ 00:06:58.554 END TEST accel_dif_generate_copy 00:06:58.554 ************************************ 00:06:58.554 19:54:45 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:58.554 19:54:45 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:58.554 19:54:45 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:58.554 19:54:45 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.554 19:54:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.554 ************************************ 00:06:58.554 START TEST accel_comp 00:06:58.554 ************************************ 00:06:58.554 19:54:45 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:58.554 19:54:45 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:58.554 19:54:45 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:06:58.554 19:54:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.554 19:54:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.554 19:54:45 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:58.554 19:54:45 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:58.554 19:54:45 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:58.554 19:54:45 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.554 19:54:45 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.554 19:54:45 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.554 19:54:45 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.554 19:54:45 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.554 19:54:45 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:58.554 19:54:45 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:58.554 [2024-07-13 19:54:45.963801] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:58.554 [2024-07-13 19:54:45.963879] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3670883 ] 00:06:58.554 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.554 [2024-07-13 19:54:46.032280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.554 [2024-07-13 19:54:46.069529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 
00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.554 19:54:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.555 19:54:46 
accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.555 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.555 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.555 19:54:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.555 19:54:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.555 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.555 19:54:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:59.933 19:54:47 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.933 00:06:59.933 real 0m1.296s 00:06:59.933 user 0m1.174s 00:06:59.933 sys 0m0.137s 00:06:59.933 19:54:47 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.933 19:54:47 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:59.933 ************************************ 00:06:59.933 END TEST accel_comp 00:06:59.933 ************************************ 00:06:59.933 19:54:47 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:59.933 19:54:47 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:59.933 19:54:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.933 19:54:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.933 ************************************ 00:06:59.933 START TEST accel_decomp 00:06:59.933 ************************************ 
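The decompress case that starts here follows the same harness pattern as the preceding tests, and it is the source of the highly repetitive val=/case/IFS=:/read lines that fill this log: accel.sh runs accel_perf with the requested workload, then walks its "key: value" output one line at a time, recording which opcode and which engine module were actually exercised before asserting on them. The loop below is an approximate reconstruction inferred from the accel/accel.sh@19, @21, @22, @23 and @27 trace markers; the key names matched in the case statement are assumptions for illustration, not the verbatim SPDK script.

  # Illustrative reconstruction of the parsing loop behind the repeated trace lines.
  accel_test() {
    local accel_opc accel_module
    while IFS=: read -r var val; do                      # accel/accel.sh@19
      case "$var" in                                     # accel/accel.sh@21
        *module*) accel_module=${val//[[:space:]]/} ;;   # accel/accel.sh@22, e.g. software
        *opcode*) accel_opc=${val//[[:space:]]/} ;;      # accel/accel.sh@23, e.g. decompress
      esac
    done < <(accel_perf "$@")   # accel_perf is the wrapper traced at accel/accel.sh@15/@12
    # accel/accel.sh@27: the case passes only if both values were seen and the
    # software implementation handled the requested opcode.
    [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]]
  }

  # Invocation for this section (taken verbatim from the run_test line above):
  accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y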
00:06:59.933 19:54:47 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:59.933 [2024-07-13 19:54:47.316147] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:59.933 [2024-07-13 19:54:47.316191] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3671169 ] 00:06:59.933 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.933 [2024-07-13 19:54:47.378490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.933 [2024-07-13 19:54:47.415759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.933 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@20 -- 
# val= 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 
00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.934 19:54:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:01.311 19:54:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:01.312 19:54:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.312 19:54:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:01.312 19:54:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:01.312 19:54:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:01.312 19:54:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.312 19:54:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:01.312 19:54:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:01.312 19:54:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:01.312 19:54:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.312 19:54:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:01.312 19:54:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:01.312 19:54:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:01.312 19:54:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.312 19:54:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:01.312 19:54:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:01.312 19:54:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:01.312 19:54:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.312 19:54:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:01.312 19:54:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:01.312 19:54:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:01.312 19:54:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.312 19:54:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:01.312 19:54:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:01.312 19:54:48 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.312 19:54:48 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:01.312 19:54:48 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.312 00:07:01.312 real 0m1.276s 00:07:01.312 user 0m1.168s 00:07:01.312 sys 0m0.123s 00:07:01.312 19:54:48 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.312 19:54:48 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:01.312 ************************************ 00:07:01.312 END TEST accel_decomp 00:07:01.312 
************************************ 00:07:01.312 19:54:48 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:01.312 19:54:48 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:01.312 19:54:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.312 19:54:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.312 ************************************ 00:07:01.312 START TEST accel_decmop_full 00:07:01.312 ************************************ 00:07:01.312 19:54:48 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:01.312 [2024-07-13 19:54:48.673044] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:01.312 [2024-07-13 19:54:48.673119] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3671455 ] 00:07:01.312 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.312 [2024-07-13 19:54:48.741742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.312 [2024-07-13 19:54:48.778961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.312 19:54:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@19 
-- # read -r var val 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:02.689 19:54:49 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.689 00:07:02.689 real 0m1.307s 00:07:02.689 user 0m1.192s 00:07:02.689 sys 0m0.130s 00:07:02.689 19:54:49 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.689 19:54:49 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:02.689 ************************************ 00:07:02.690 END TEST accel_decmop_full 00:07:02.690 ************************************ 00:07:02.690 19:54:49 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:02.690 19:54:49 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:02.690 19:54:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.690 19:54:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.690 ************************************ 00:07:02.690 START TEST accel_decomp_mcore 00:07:02.690 ************************************ 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:02.690 [2024-07-13 19:54:50.052558] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:02.690 [2024-07-13 19:54:50.052642] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3671736 ] 00:07:02.690 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.690 [2024-07-13 19:54:50.122203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:02.690 [2024-07-13 19:54:50.163188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.690 [2024-07-13 19:54:50.163284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.690 [2024-07-13 19:54:50.163344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.690 [2024-07-13 19:54:50.163346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.690 19:54:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.071 00:07:04.071 real 0m1.312s 00:07:04.071 user 0m4.510s 00:07:04.071 sys 0m0.145s 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:04.071 19:54:51 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:04.071 ************************************ 00:07:04.071 END TEST accel_decomp_mcore 00:07:04.071 ************************************ 00:07:04.071 19:54:51 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.071 19:54:51 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:04.071 19:54:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.071 19:54:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.071 ************************************ 00:07:04.071 START TEST accel_decomp_full_mcore 00:07:04.071 ************************************ 00:07:04.071 19:54:51 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.071 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:04.071 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:04.071 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.071 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.071 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.071 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.071 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:04.071 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.071 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.071 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.071 19:54:51 accel.accel_decomp_full_mcore -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.071 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.071 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:04.071 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:04.071 [2024-07-13 19:54:51.411116] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:04.071 [2024-07-13 19:54:51.411158] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3671969 ] 00:07:04.071 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.071 [2024-07-13 19:54:51.474557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:04.071 [2024-07-13 19:54:51.515656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.071 [2024-07-13 19:54:51.515750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.071 [2024-07-13 19:54:51.515849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.071 [2024-07-13 19:54:51.515852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.071 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.071 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.071 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.071 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.071 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:04.072 19:54:51 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.072 19:54:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.452 00:07:05.452 real 0m1.302s 00:07:05.452 user 0m4.534s 00:07:05.452 sys 0m0.134s 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.452 19:54:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:05.452 ************************************ 00:07:05.452 END TEST accel_decomp_full_mcore 00:07:05.452 ************************************ 00:07:05.452 19:54:52 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:05.452 19:54:52 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:05.452 19:54:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.452 19:54:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.452 ************************************ 00:07:05.452 START TEST accel_decomp_mthread 00:07:05.452 ************************************ 00:07:05.452 19:54:52 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:05.452 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:05.452 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:05.452 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:05.452 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.452 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.452 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:05.452 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:05.452 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.452 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.452 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.452 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.452 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.452 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:05.452 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@41 
-- # jq -r . 00:07:05.452 [2024-07-13 19:54:52.790220] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:05.453 [2024-07-13 19:54:52.790281] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3672177 ] 00:07:05.453 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.453 [2024-07-13 19:54:52.851657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.453 [2024-07-13 19:54:52.889262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.453 
19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.453 19:54:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.453 19:54:52 accel.accel_decomp_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:07:06.830 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.830 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.830 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.830 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.830 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.830 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.830 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.830 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.830 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.830 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.830 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.830 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.830 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.831 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.831 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.831 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.831 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.831 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.831 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.831 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.831 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.831 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.831 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.831 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.831 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.831 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.831 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.831 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.831 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.831 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:06.831 19:54:54 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.831 00:07:06.831 real 0m1.283s 00:07:06.831 user 0m1.178s 00:07:06.831 sys 0m0.122s 00:07:06.831 19:54:54 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.831 19:54:54 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:06.831 ************************************ 00:07:06.831 END TEST accel_decomp_mthread 00:07:06.831 ************************************ 00:07:06.831 19:54:54 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:06.831 19:54:54 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:06.831 19:54:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.831 
19:54:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.831 ************************************ 00:07:06.831 START TEST accel_decomp_full_mthread 00:07:06.831 ************************************ 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:06.831 [2024-07-13 19:54:54.143135] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:06.831 [2024-07-13 19:54:54.143183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3672364 ] 00:07:06.831 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.831 [2024-07-13 19:54:54.208183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.831 [2024-07-13 19:54:54.245748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.831 19:54:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.767 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.767 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.767 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.027 00:07:08.027 real 0m1.303s 00:07:08.027 user 0m1.190s 00:07:08.027 sys 0m0.129s 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.027 19:54:55 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:08.027 ************************************ 00:07:08.027 END TEST accel_decomp_full_mthread 00:07:08.027 
************************************ 00:07:08.027 19:54:55 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:08.027 19:54:55 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:08.027 19:54:55 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:08.027 19:54:55 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:08.027 19:54:55 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.027 19:54:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.027 19:54:55 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.027 19:54:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.027 19:54:55 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.027 19:54:55 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.027 19:54:55 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.027 19:54:55 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:08.027 19:54:55 accel -- accel/accel.sh@41 -- # jq -r . 00:07:08.027 ************************************ 00:07:08.027 START TEST accel_dif_functional_tests 00:07:08.027 ************************************ 00:07:08.027 19:54:55 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:08.027 [2024-07-13 19:54:55.516845] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:08.027 [2024-07-13 19:54:55.516886] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3672629 ] 00:07:08.027 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.027 [2024-07-13 19:54:55.580956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.027 [2024-07-13 19:54:55.620309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.027 [2024-07-13 19:54:55.620405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.027 [2024-07-13 19:54:55.620405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.027 00:07:08.027 00:07:08.027 CUnit - A unit testing framework for C - Version 2.1-3 00:07:08.027 http://cunit.sourceforge.net/ 00:07:08.027 00:07:08.027 00:07:08.027 Suite: accel_dif 00:07:08.027 Test: verify: DIF generated, GUARD check ...passed 00:07:08.027 Test: verify: DIF generated, APPTAG check ...passed 00:07:08.027 Test: verify: DIF generated, REFTAG check ...passed 00:07:08.027 Test: verify: DIF not generated, GUARD check ...[2024-07-13 19:54:55.683594] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:08.027 passed 00:07:08.027 Test: verify: DIF not generated, APPTAG check ...[2024-07-13 19:54:55.683646] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:08.027 passed 00:07:08.027 Test: verify: DIF not generated, REFTAG check ...[2024-07-13 19:54:55.683676] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:08.027 passed 00:07:08.027 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:08.027 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-13 19:54:55.683725] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:08.027 passed 00:07:08.027 
Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:08.027 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:08.027 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:08.027 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-13 19:54:55.683823] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:08.027 passed 00:07:08.027 Test: verify copy: DIF generated, GUARD check ...passed 00:07:08.027 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:08.027 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:08.027 Test: verify copy: DIF not generated, GUARD check ...[2024-07-13 19:54:55.683931] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:08.027 passed 00:07:08.027 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-13 19:54:55.683958] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:08.027 passed 00:07:08.027 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-13 19:54:55.683985] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:08.027 passed 00:07:08.027 Test: generate copy: DIF generated, GUARD check ...passed 00:07:08.027 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:08.027 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:08.027 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:08.027 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:08.027 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:08.027 Test: generate copy: iovecs-len validate ...[2024-07-13 19:54:55.684157] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
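A note on the *ERROR* lines in the DIF suite above: they come from the negative-path cases ("DIF not generated", "iovecs-len validate", and so on), where the library is expected to reject the data, and each such case is still reported as passed. The whole suite is a standalone CUnit binary driven the same way as accel_perf; a minimal sketch of the invocation, taken from the run_test line above and assuming the harness-provided config on fd 62, is:

    # Sketch: run the DIF functional tests by hand, with the accel JSON
    # config supplied on fd 62 as in the harness.
    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    "$SPDK"/test/accel/dif/dif -c /dev/fd/62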
00:07:08.027 passed 00:07:08.027 Test: generate copy: buffer alignment validate ...passed 00:07:08.027 00:07:08.027 Run Summary: Type Total Ran Passed Failed Inactive 00:07:08.027 suites 1 1 n/a 0 0 00:07:08.027 tests 26 26 26 0 0 00:07:08.027 asserts 115 115 115 0 n/a 00:07:08.027 00:07:08.027 Elapsed time = 0.002 seconds 00:07:08.287 00:07:08.287 real 0m0.332s 00:07:08.287 user 0m0.527s 00:07:08.287 sys 0m0.149s 00:07:08.287 19:54:55 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.287 19:54:55 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:08.287 ************************************ 00:07:08.287 END TEST accel_dif_functional_tests 00:07:08.287 ************************************ 00:07:08.287 00:07:08.287 real 0m29.396s 00:07:08.287 user 0m32.570s 00:07:08.287 sys 0m4.700s 00:07:08.287 19:54:55 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.287 19:54:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.287 ************************************ 00:07:08.287 END TEST accel 00:07:08.287 ************************************ 00:07:08.287 19:54:55 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:08.287 19:54:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:08.287 19:54:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.287 19:54:55 -- common/autotest_common.sh@10 -- # set +x 00:07:08.546 ************************************ 00:07:08.546 START TEST accel_rpc 00:07:08.546 ************************************ 00:07:08.546 19:54:55 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:08.546 * Looking for test storage... 00:07:08.546 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:07:08.546 19:54:56 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:08.546 19:54:56 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:08.546 19:54:56 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3672902 00:07:08.546 19:54:56 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3672902 00:07:08.546 19:54:56 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 3672902 ']' 00:07:08.546 19:54:56 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.546 19:54:56 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:08.546 19:54:56 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.546 19:54:56 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:08.546 19:54:56 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.547 [2024-07-13 19:54:56.069689] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:08.547 [2024-07-13 19:54:56.069736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3672902 ] 00:07:08.547 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.547 [2024-07-13 19:54:56.132752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.547 [2024-07-13 19:54:56.173007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.806 19:54:56 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:08.806 19:54:56 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:08.806 19:54:56 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:08.806 19:54:56 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:08.806 19:54:56 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:08.806 19:54:56 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:08.806 19:54:56 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:08.806 19:54:56 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:08.806 19:54:56 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.806 19:54:56 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.806 ************************************ 00:07:08.806 START TEST accel_assign_opcode 00:07:08.806 ************************************ 00:07:08.806 19:54:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:08.806 19:54:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:08.806 19:54:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.806 19:54:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:08.806 [2024-07-13 19:54:56.261559] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:08.806 19:54:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.806 19:54:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:08.806 19:54:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.806 19:54:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:08.806 [2024-07-13 19:54:56.269571] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:08.806 19:54:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.806 19:54:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:08.806 19:54:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.806 19:54:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:08.806 19:54:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.806 19:54:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:08.806 19:54:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.806 19:54:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:08.806 19:54:56 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:08.806 19:54:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:08.806 19:54:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.066 software 00:07:09.066 00:07:09.066 real 0m0.219s 00:07:09.066 user 0m0.037s 00:07:09.066 sys 0m0.010s 00:07:09.066 19:54:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.066 19:54:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:09.066 ************************************ 00:07:09.066 END TEST accel_assign_opcode 00:07:09.066 ************************************ 00:07:09.066 19:54:56 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3672902 00:07:09.066 19:54:56 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 3672902 ']' 00:07:09.066 19:54:56 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 3672902 00:07:09.066 19:54:56 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:09.066 19:54:56 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:09.066 19:54:56 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3672902 00:07:09.066 19:54:56 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:09.066 19:54:56 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:09.066 19:54:56 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3672902' 00:07:09.067 killing process with pid 3672902 00:07:09.067 19:54:56 accel_rpc -- common/autotest_common.sh@965 -- # kill 3672902 00:07:09.067 19:54:56 accel_rpc -- common/autotest_common.sh@970 -- # wait 3672902 00:07:09.326 00:07:09.326 real 0m0.898s 00:07:09.326 user 0m0.804s 00:07:09.326 sys 0m0.424s 00:07:09.326 19:54:56 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.326 19:54:56 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.326 ************************************ 00:07:09.326 END TEST accel_rpc 00:07:09.326 ************************************ 00:07:09.326 19:54:56 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:09.326 19:54:56 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:09.326 19:54:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.326 19:54:56 -- common/autotest_common.sh@10 -- # set +x 00:07:09.326 ************************************ 00:07:09.326 START TEST app_cmdline 00:07:09.327 ************************************ 00:07:09.327 19:54:56 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:09.587 * Looking for test storage... 
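The accel_rpc test that finishes just above boils down to four RPC calls against a target started with --wait-for-rpc; the sketch below mirrors the sequence the trace shows (rpc_cmd is a thin wrapper around scripts/rpc.py). It is a sketch for manual reproduction, not the test itself, and it assumes the target's RPC socket is already up before the calls are issued (the script waits for it via waitforlisten).

    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    RPC="$SPDK"/scripts/rpc.py

    # Start the target without completing subsystem init so the opcode
    # assignment can happen first.
    "$SPDK"/build/bin/spdk_tgt --wait-for-rpc &

    $RPC accel_assign_opc -o copy -m incorrect    # accepted at assign time even though the module does not exist
    $RPC accel_assign_opc -o copy -m software     # re-assign the copy opcode to the software module
    $RPC framework_start_init                     # finish subsystem initialization
    $RPC accel_get_opc_assignments | jq -r .copy  # the test expects this to print "software"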
00:07:09.587 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:09.587 19:54:57 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:09.587 19:54:57 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3673031 00:07:09.587 19:54:57 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3673031 00:07:09.587 19:54:57 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:09.587 19:54:57 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 3673031 ']' 00:07:09.587 19:54:57 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.587 19:54:57 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:09.587 19:54:57 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.587 19:54:57 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:09.587 19:54:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:09.587 [2024-07-13 19:54:57.052280] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:09.587 [2024-07-13 19:54:57.052343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3673031 ] 00:07:09.587 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.587 [2024-07-13 19:54:57.118520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.587 [2024-07-13 19:54:57.159713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.846 19:54:57 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:09.846 19:54:57 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:09.846 19:54:57 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:09.846 { 00:07:09.846 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086", 00:07:09.846 "fields": { 00:07:09.846 "major": 24, 00:07:09.846 "minor": 5, 00:07:09.846 "patch": 1, 00:07:09.846 "suffix": "-pre", 00:07:09.846 "commit": "5fa2f5086" 00:07:09.846 } 00:07:09.846 } 00:07:09.846 19:54:57 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:09.846 19:54:57 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:09.846 19:54:57 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:09.846 19:54:57 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:09.846 19:54:57 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:09.846 19:54:57 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.846 19:54:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:09.846 19:54:57 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:09.846 19:54:57 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:10.106 19:54:57 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.106 19:54:57 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:10.106 19:54:57 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:10.106 19:54:57 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:10.106 19:54:57 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:10.106 19:54:57 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:10.106 19:54:57 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:10.106 19:54:57 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.106 19:54:57 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:10.106 19:54:57 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.106 19:54:57 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:10.106 19:54:57 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.106 19:54:57 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:10.106 19:54:57 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:07:10.106 19:54:57 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:10.106 request: 00:07:10.106 { 00:07:10.106 "method": "env_dpdk_get_mem_stats", 00:07:10.106 "req_id": 1 00:07:10.106 } 00:07:10.106 Got JSON-RPC error response 00:07:10.106 response: 00:07:10.106 { 00:07:10.106 "code": -32601, 00:07:10.106 "message": "Method not found" 00:07:10.106 } 00:07:10.106 19:54:57 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:10.106 19:54:57 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:10.106 19:54:57 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:10.106 19:54:57 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:10.106 19:54:57 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3673031 00:07:10.106 19:54:57 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 3673031 ']' 00:07:10.106 19:54:57 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 3673031 00:07:10.106 19:54:57 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:10.106 19:54:57 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:10.106 19:54:57 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3673031 00:07:10.106 19:54:57 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:10.106 19:54:57 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:10.106 19:54:57 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3673031' 00:07:10.106 killing process with pid 3673031 00:07:10.106 19:54:57 app_cmdline -- common/autotest_common.sh@965 -- # kill 3673031 00:07:10.106 19:54:57 app_cmdline -- common/autotest_common.sh@970 -- # wait 3673031 00:07:10.761 00:07:10.761 real 0m1.106s 00:07:10.761 user 0m1.220s 00:07:10.761 sys 0m0.435s 00:07:10.761 19:54:58 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.761 
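[Editor's note] The sequence above exercises the --rpcs-allowed allow-list: spdk_tgt was started permitting only spdk_get_version and rpc_get_methods, so the call to env_dpdk_get_mem_stats is answered with JSON-RPC error -32601 ("Method not found") and the NOT() wrapper treats the non-zero exit as a pass. Reproducing the same check by hand from the workspace root might look like this (illustrative only):

    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    ./scripts/rpc.py spdk_get_version          # on the allow-list, returns the version object
    ./scripts/rpc.py rpc_get_methods           # also permitted; lists exactly the two allowed methods
    ./scripts/rpc.py env_dpdk_get_mem_stats    # rejected with code -32601, so the exit status is non-zero
    kill %1                                    # stop the background spdk_tgt when done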
19:54:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:10.761 ************************************ 00:07:10.761 END TEST app_cmdline 00:07:10.761 ************************************ 00:07:10.761 19:54:58 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:07:10.761 19:54:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:10.761 19:54:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.761 19:54:58 -- common/autotest_common.sh@10 -- # set +x 00:07:10.761 ************************************ 00:07:10.761 START TEST version 00:07:10.761 ************************************ 00:07:10.761 19:54:58 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:07:10.761 * Looking for test storage... 00:07:10.761 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:10.761 19:54:58 version -- app/version.sh@17 -- # get_header_version major 00:07:10.761 19:54:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:10.761 19:54:58 version -- app/version.sh@14 -- # cut -f2 00:07:10.761 19:54:58 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.761 19:54:58 version -- app/version.sh@17 -- # major=24 00:07:10.761 19:54:58 version -- app/version.sh@18 -- # get_header_version minor 00:07:10.761 19:54:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:10.761 19:54:58 version -- app/version.sh@14 -- # cut -f2 00:07:10.761 19:54:58 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.761 19:54:58 version -- app/version.sh@18 -- # minor=5 00:07:10.761 19:54:58 version -- app/version.sh@19 -- # get_header_version patch 00:07:10.761 19:54:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:10.761 19:54:58 version -- app/version.sh@14 -- # cut -f2 00:07:10.761 19:54:58 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.761 19:54:58 version -- app/version.sh@19 -- # patch=1 00:07:10.761 19:54:58 version -- app/version.sh@20 -- # get_header_version suffix 00:07:10.761 19:54:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:10.761 19:54:58 version -- app/version.sh@14 -- # cut -f2 00:07:10.761 19:54:58 version -- app/version.sh@14 -- # tr -d '"' 00:07:10.761 19:54:58 version -- app/version.sh@20 -- # suffix=-pre 00:07:10.761 19:54:58 version -- app/version.sh@22 -- # version=24.5 00:07:10.761 19:54:58 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:10.761 19:54:58 version -- app/version.sh@25 -- # version=24.5.1 00:07:10.761 19:54:58 version -- app/version.sh@28 -- # version=24.5.1rc0 00:07:10.761 19:54:58 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:10.761 19:54:58 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:07:10.761 19:54:58 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:07:10.761 19:54:58 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:07:10.761 00:07:10.761 real 0m0.177s 00:07:10.761 user 0m0.089s 00:07:10.761 sys 0m0.132s 00:07:10.761 19:54:58 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.761 19:54:58 version -- common/autotest_common.sh@10 -- # set +x 00:07:10.761 ************************************ 00:07:10.761 END TEST version 00:07:10.761 ************************************ 00:07:10.761 19:54:58 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:10.762 19:54:58 -- spdk/autotest.sh@198 -- # uname -s 00:07:10.762 19:54:58 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:10.762 19:54:58 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:10.762 19:54:58 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:10.762 19:54:58 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:10.762 19:54:58 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:10.762 19:54:58 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:10.762 19:54:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:10.762 19:54:58 -- common/autotest_common.sh@10 -- # set +x 00:07:10.762 19:54:58 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:10.762 19:54:58 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:10.762 19:54:58 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:07:10.762 19:54:58 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:07:10.762 19:54:58 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:07:10.762 19:54:58 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:07:10.762 19:54:58 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:07:10.762 19:54:58 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:07:10.762 19:54:58 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:07:10.762 19:54:58 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:07:10.762 19:54:58 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:07:10.762 19:54:58 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:07:10.762 19:54:58 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:07:10.762 19:54:58 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:07:10.762 19:54:58 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:07:10.762 19:54:58 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:07:10.762 19:54:58 -- spdk/autotest.sh@371 -- # [[ 1 -eq 1 ]] 00:07:10.762 19:54:58 -- spdk/autotest.sh@372 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:10.762 19:54:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:10.762 19:54:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.762 19:54:58 -- common/autotest_common.sh@10 -- # set +x 00:07:11.021 ************************************ 00:07:11.021 START TEST llvm_fuzz 00:07:11.021 ************************************ 00:07:11.021 19:54:58 llvm_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:11.021 * Looking for test storage... 
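[Editor's note] The version test traced above derives 24.5.1rc0 from include/spdk/version.h and compares it against python's spdk.__version__. A condensed sketch of that extraction, using the same grep/cut/tr pipeline as the trace (the rc0 handling of the -pre suffix is summarized here, not quoted from the script):

    hdr=include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    version=$major.$minor                        # 24.5
    (( patch != 0 )) && version=$version.$patch  # 24.5.1
    version=${version}rc0                        # pre-release builds compare as 24.5.1rc0
    py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
    [[ $py_version == "$version" ]]              # the test passes when the two agree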
00:07:11.021 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:07:11.021 19:54:58 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:07:11.021 19:54:58 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:07:11.021 19:54:58 llvm_fuzz -- common/autotest_common.sh@546 -- # fuzzers=() 00:07:11.021 19:54:58 llvm_fuzz -- common/autotest_common.sh@546 -- # local fuzzers 00:07:11.021 19:54:58 llvm_fuzz -- common/autotest_common.sh@548 -- # [[ -n '' ]] 00:07:11.021 19:54:58 llvm_fuzz -- common/autotest_common.sh@551 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:07:11.021 19:54:58 llvm_fuzz -- common/autotest_common.sh@552 -- # fuzzers=("${fuzzers[@]##*/}") 00:07:11.021 19:54:58 llvm_fuzz -- common/autotest_common.sh@555 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:07:11.021 19:54:58 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:07:11.021 19:54:58 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:07:11.021 19:54:58 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:07:11.021 19:54:58 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:11.021 19:54:58 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:11.021 19:54:58 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:11.021 19:54:58 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:11.021 19:54:58 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:11.021 19:54:58 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:11.021 19:54:58 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:11.021 19:54:58 llvm_fuzz -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:11.021 19:54:58 llvm_fuzz -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.021 19:54:58 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:11.021 ************************************ 00:07:11.021 START TEST nvmf_fuzz 00:07:11.021 ************************************ 00:07:11.021 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:11.021 * Looking for test storage... 
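[Editor's note] llvm.sh above discovers its fuzz targets by globbing the llvm fuzz directory into an array and stripping the path prefix, which is why the list comes back as 'common.sh llvm-gcov.sh nvmf vfio'; only the directory entries are then dispatched. A short sketch of that idiom (an assumed simplification of the traced logic):

    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    fuzzers=("$rootdir/test/fuzz/llvm/"*)     # every entry under test/fuzz/llvm
    fuzzers=("${fuzzers[@]##*/}")             # keep only the basenames
    for fuzzer in "${fuzzers[@]}"; do
        case "$fuzzer" in
            nvmf|vfio) run_test "${fuzzer}_fuzz" "$rootdir/test/fuzz/llvm/$fuzzer/run.sh" ;;
            *) ;;                             # helpers such as common.sh and llvm-gcov.sh are skipped
        esac
    done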
00:07:11.285 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz 
-- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@59 -- # 
CONFIG_IPSEC_MB_DIR= 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:11.285 19:54:58 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 
00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:11.286 #define SPDK_CONFIG_H 00:07:11.286 #define SPDK_CONFIG_APPS 1 00:07:11.286 #define SPDK_CONFIG_ARCH native 00:07:11.286 #undef SPDK_CONFIG_ASAN 00:07:11.286 #undef SPDK_CONFIG_AVAHI 00:07:11.286 #undef SPDK_CONFIG_CET 00:07:11.286 #define SPDK_CONFIG_COVERAGE 1 00:07:11.286 #define SPDK_CONFIG_CROSS_PREFIX 00:07:11.286 #undef SPDK_CONFIG_CRYPTO 00:07:11.286 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:11.286 #undef SPDK_CONFIG_CUSTOMOCF 00:07:11.286 #undef SPDK_CONFIG_DAOS 00:07:11.286 #define SPDK_CONFIG_DAOS_DIR 00:07:11.286 #define SPDK_CONFIG_DEBUG 1 00:07:11.286 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:11.286 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:07:11.286 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:07:11.286 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:07:11.286 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:11.286 #undef SPDK_CONFIG_DPDK_UADK 00:07:11.286 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:11.286 #define SPDK_CONFIG_EXAMPLES 1 00:07:11.286 #undef SPDK_CONFIG_FC 00:07:11.286 #define SPDK_CONFIG_FC_PATH 00:07:11.286 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:11.286 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:11.286 #undef SPDK_CONFIG_FUSE 00:07:11.286 #define SPDK_CONFIG_FUZZER 1 00:07:11.286 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:11.286 #undef SPDK_CONFIG_GOLANG 00:07:11.286 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:11.286 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:11.286 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:11.286 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:11.286 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:11.286 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:11.286 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:11.286 #define SPDK_CONFIG_IDXD 1 00:07:11.286 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:11.286 #undef SPDK_CONFIG_IPSEC_MB 00:07:11.286 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:11.286 #define SPDK_CONFIG_ISAL 1 00:07:11.286 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:11.286 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:11.286 #define SPDK_CONFIG_LIBDIR 00:07:11.286 #undef SPDK_CONFIG_LTO 00:07:11.286 #define SPDK_CONFIG_MAX_LCORES 00:07:11.286 #define SPDK_CONFIG_NVME_CUSE 1 00:07:11.286 #undef SPDK_CONFIG_OCF 00:07:11.286 #define SPDK_CONFIG_OCF_PATH 00:07:11.286 #define SPDK_CONFIG_OPENSSL_PATH 00:07:11.286 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:11.286 #define SPDK_CONFIG_PGO_DIR 
00:07:11.286 #undef SPDK_CONFIG_PGO_USE 00:07:11.286 #define SPDK_CONFIG_PREFIX /usr/local 00:07:11.286 #undef SPDK_CONFIG_RAID5F 00:07:11.286 #undef SPDK_CONFIG_RBD 00:07:11.286 #define SPDK_CONFIG_RDMA 1 00:07:11.286 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:11.286 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:11.286 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:11.286 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:11.286 #undef SPDK_CONFIG_SHARED 00:07:11.286 #undef SPDK_CONFIG_SMA 00:07:11.286 #define SPDK_CONFIG_TESTS 1 00:07:11.286 #undef SPDK_CONFIG_TSAN 00:07:11.286 #define SPDK_CONFIG_UBLK 1 00:07:11.286 #define SPDK_CONFIG_UBSAN 1 00:07:11.286 #undef SPDK_CONFIG_UNIT_TESTS 00:07:11.286 #undef SPDK_CONFIG_URING 00:07:11.286 #define SPDK_CONFIG_URING_PATH 00:07:11.286 #undef SPDK_CONFIG_URING_ZNS 00:07:11.286 #undef SPDK_CONFIG_USDT 00:07:11.286 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:11.286 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:11.286 #define SPDK_CONFIG_VFIO_USER 1 00:07:11.286 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:11.286 #define SPDK_CONFIG_VHOST 1 00:07:11.286 #define SPDK_CONFIG_VIRTIO 1 00:07:11.286 #undef SPDK_CONFIG_VTUNE 00:07:11.286 #define SPDK_CONFIG_VTUNE_DIR 00:07:11.286 #define SPDK_CONFIG_WERROR 1 00:07:11.286 #define SPDK_CONFIG_WPDK_DIR 00:07:11.286 #undef SPDK_CONFIG_XNVME 00:07:11.286 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- pm/common@68 -- # uname -s 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- pm/common@68 -- # PM_OS=Linux 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:11.286 19:54:58 
llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@57 -- # : 1 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@61 -- # : 0 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@63 -- # : 0 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@65 -- # : 1 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@67 -- # : 0 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@69 -- # : 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@71 -- # : 0 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@73 -- # : 0 00:07:11.286 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@75 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@77 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@79 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@81 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@83 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@85 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@87 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@89 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:11.287 19:54:58 
llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@91 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@93 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@95 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@97 -- # : 1 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@99 -- # : 1 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@101 -- # : rdma 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@103 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@105 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@107 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@109 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@111 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@113 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@115 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@117 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@119 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@121 -- # : 1 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@125 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@127 -- # : 0 00:07:11.287 19:54:58 
llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@129 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@131 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@133 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@135 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@137 -- # : v22.11.4 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@139 -- # : true 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@141 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@143 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@145 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@147 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@149 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@151 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@153 -- # : 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@155 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@157 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@159 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@161 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@163 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@166 -- 
# : 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@168 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@170 -- # : 0 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:11.287 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 
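[Editor's note] Beyond the library and python paths, the block above pins the sanitizer behaviour for the fuzz run. The values below are copied from this trace; the suppression file they point at is populated with leak:libfuse3.so immediately afterwards:

    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file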
00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@199 -- # cat 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@262 -- # export valgrind= 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@262 -- # valgrind= 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@268 -- # uname -s 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@278 -- # MAKE=make 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j112 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@317 -- # [[ -z 3673458 ]] 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@317 -- # kill -0 3673458 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.ZoUPTc 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.ZoUPTc/tests/nvmf /tmp/spdk.ZoUPTc 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@326 -- # df -T 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz 
-- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=954408960 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4330020864 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=53036584960 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=61742317568 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=8705732608 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=30866448384 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871158784 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=12342484992 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=12348465152 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=5980160 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=30870544384 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871158784 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=614400 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=6174224384 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@361 -- # sizes["$mount"]=6174228480 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:11.288 * Looking for test storage... 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:11.288 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@371 -- # mount=/ 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@373 -- # target_space=53036584960 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@380 -- # new_size=10920325120 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:11.289 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@388 -- # return 0 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1683 -- # true 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@27 -- # exec 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- ../common.sh@8 -- # pids=() 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- ../common.sh@70 -- # local time=1 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4400 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:11.289 19:54:58 
llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:11.289 19:54:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:07:11.289 [2024-07-13 19:54:58.830302] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:11.289 [2024-07-13 19:54:58.830355] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3673536 ] 00:07:11.289 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.549 [2024-07-13 19:54:59.006698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.549 [2024-07-13 19:54:59.028826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.549 [2024-07-13 19:54:59.081126] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.549 [2024-07-13 19:54:59.097452] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:07:11.549 INFO: Running with entropic power schedule (0xFF, 100). 00:07:11.549 INFO: Seed: 2115984687 00:07:11.549 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:11.549 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:11.549 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:11.549 INFO: A corpus is not provided, starting from an empty corpus 00:07:11.549 #2 INITED exec/s: 0 rss: 63Mb 00:07:11.549 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:11.549 This may also happen if the target rejected all inputs we tried so far 00:07:11.549 [2024-07-13 19:54:59.163363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.549 [2024-07-13 19:54:59.163397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.808 NEW_FUNC[1/692]: 0x4939b0 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:07:11.808 NEW_FUNC[2/692]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:11.808 #14 NEW cov: 11798 ft: 11799 corp: 2/95b lim: 320 exec/s: 0 rss: 69Mb L: 94/94 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:12.067 [2024-07-13 19:54:59.494153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x800000 00:07:12.067 [2024-07-13 19:54:59.494190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.067 #15 NEW cov: 11928 ft: 12442 corp: 3/189b lim: 320 exec/s: 0 rss: 69Mb L: 94/94 MS: 1 ChangeBit- 00:07:12.067 [2024-07-13 19:54:59.544225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.067 [2024-07-13 19:54:59.544253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.067 #16 NEW cov: 11934 ft: 12713 corp: 4/284b lim: 320 exec/s: 0 rss: 69Mb L: 95/95 MS: 1 CrossOver- 00:07:12.067 [2024-07-13 19:54:59.584769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.067 [2024-07-13 19:54:59.584796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.067 [2024-07-13 19:54:59.584899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:88888888 cdw11:88888888 00:07:12.067 [2024-07-13 19:54:59.584917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.067 [2024-07-13 19:54:59.585048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (88) qid:0 cid:6 nsid:88888888 cdw10:88888888 cdw11:88888888 00:07:12.067 [2024-07-13 19:54:59.585064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.067 #17 NEW cov: 12040 ft: 13236 corp: 5/482b lim: 320 exec/s: 0 rss: 69Mb L: 198/198 MS: 1 InsertRepeatedBytes- 00:07:12.067 [2024-07-13 19:54:59.624531] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.067 [2024-07-13 19:54:59.624557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.067 #18 NEW cov: 12057 ft: 13376 corp: 6/577b lim: 320 exec/s: 0 rss: 70Mb L: 95/198 MS: 1 CrossOver- 00:07:12.067 [2024-07-13 19:54:59.674645] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.067 [2024-07-13 19:54:59.674671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.067 #19 NEW cov: 12057 ft: 13440 corp: 7/671b lim: 320 exec/s: 0 rss: 70Mb L: 94/198 MS: 1 ChangeBinInt- 00:07:12.067 [2024-07-13 19:54:59.714809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:80000000 cdw10:00000000 cdw11:00000000 00:07:12.067 [2024-07-13 19:54:59.714835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.326 #20 NEW cov: 12057 ft: 13512 corp: 8/743b lim: 320 exec/s: 0 rss: 70Mb L: 72/198 MS: 1 CrossOver- 00:07:12.326 [2024-07-13 19:54:59.754901] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.326 [2024-07-13 19:54:59.754926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.326 #21 NEW cov: 12057 ft: 13552 corp: 9/826b lim: 320 exec/s: 0 rss: 70Mb L: 83/198 MS: 1 EraseBytes- 00:07:12.326 [2024-07-13 19:54:59.804816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.326 [2024-07-13 19:54:59.804843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.326 #22 NEW cov: 12057 ft: 13577 corp: 10/920b lim: 320 exec/s: 0 rss: 70Mb L: 94/198 MS: 1 ChangeBinInt- 00:07:12.326 [2024-07-13 19:54:59.844817] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.326 [2024-07-13 19:54:59.844846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.326 #23 NEW cov: 12057 ft: 13704 corp: 11/1003b lim: 320 exec/s: 0 rss: 70Mb L: 83/198 MS: 1 ShuffleBytes- 00:07:12.326 [2024-07-13 19:54:59.895406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (75) qid:0 cid:4 nsid:75757575 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:12.326 [2024-07-13 19:54:59.895433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.326 NEW_FUNC[1/1]: 0x17c4780 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:07:12.326 #24 NEW cov: 12070 ft: 14062 corp: 12/1118b lim: 320 exec/s: 0 rss: 70Mb L: 115/198 MS: 1 InsertRepeatedBytes- 00:07:12.326 [2024-07-13 19:54:59.945439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x800000 00:07:12.326 [2024-07-13 19:54:59.945470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.326 #25 NEW cov: 12070 ft: 14077 corp: 13/1217b lim: 320 exec/s: 0 rss: 70Mb L: 99/198 MS: 1 CopyPart- 00:07:12.585 [2024-07-13 19:55:00.005839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND 
(a7) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.585 [2024-07-13 19:55:00.005868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.585 [2024-07-13 19:55:00.005988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.585 [2024-07-13 19:55:00.006004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.585 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:12.585 #41 NEW cov: 12093 ft: 14242 corp: 14/1379b lim: 320 exec/s: 0 rss: 70Mb L: 162/198 MS: 1 CopyPart- 00:07:12.585 [2024-07-13 19:55:00.055397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x800000 00:07:12.585 [2024-07-13 19:55:00.055425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.585 #42 NEW cov: 12093 ft: 14246 corp: 15/1478b lim: 320 exec/s: 0 rss: 70Mb L: 99/198 MS: 1 ShuffleBytes- 00:07:12.585 [2024-07-13 19:55:00.095537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.585 [2024-07-13 19:55:00.095564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.585 #43 NEW cov: 12093 ft: 14288 corp: 16/1572b lim: 320 exec/s: 0 rss: 70Mb L: 94/198 MS: 1 CrossOver- 00:07:12.585 [2024-07-13 19:55:00.146433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:0 cdw10:0aa70000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.585 [2024-07-13 19:55:00.146462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.585 [2024-07-13 19:55:00.146604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.585 [2024-07-13 19:55:00.146620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.586 [2024-07-13 19:55:00.146739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.586 [2024-07-13 19:55:00.146755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.586 #44 NEW cov: 12093 ft: 14322 corp: 17/1793b lim: 320 exec/s: 44 rss: 70Mb L: 221/221 MS: 1 CrossOver- 00:07:12.586 [2024-07-13 19:55:00.196240] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.586 [2024-07-13 19:55:00.196265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.586 #45 NEW cov: 12093 ft: 14346 corp: 18/1876b lim: 320 exec/s: 45 rss: 70Mb L: 83/221 MS: 1 ChangeBit- 00:07:12.586 [2024-07-13 19:55:00.236230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 
cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x800000 00:07:12.586 [2024-07-13 19:55:00.236254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.844 #46 NEW cov: 12093 ft: 14372 corp: 19/1975b lim: 320 exec/s: 46 rss: 70Mb L: 99/221 MS: 1 ShuffleBytes- 00:07:12.844 [2024-07-13 19:55:00.276875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffff800000 00:07:12.845 [2024-07-13 19:55:00.276901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.845 [2024-07-13 19:55:00.277039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:12.845 [2024-07-13 19:55:00.277056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.845 [2024-07-13 19:55:00.277187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.845 [2024-07-13 19:55:00.277203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:12.845 #47 NEW cov: 12093 ft: 14446 corp: 20/2198b lim: 320 exec/s: 47 rss: 70Mb L: 223/223 MS: 1 InsertRepeatedBytes- 00:07:12.845 [2024-07-13 19:55:00.316059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x800000 00:07:12.845 [2024-07-13 19:55:00.316087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.845 #48 NEW cov: 12093 ft: 14504 corp: 21/2297b lim: 320 exec/s: 48 rss: 70Mb L: 99/223 MS: 1 ChangeBinInt- 00:07:12.845 [2024-07-13 19:55:00.366879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.845 [2024-07-13 19:55:00.366909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.845 [2024-07-13 19:55:00.367026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.845 [2024-07-13 19:55:00.367043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.845 #49 NEW cov: 12093 ft: 14517 corp: 22/2433b lim: 320 exec/s: 49 rss: 70Mb L: 136/223 MS: 1 CopyPart- 00:07:12.845 [2024-07-13 19:55:00.416767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:88888800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.845 [2024-07-13 19:55:00.416793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.845 #50 NEW cov: 12093 ft: 14560 corp: 23/2551b lim: 320 exec/s: 50 rss: 70Mb L: 118/223 MS: 1 EraseBytes- 00:07:12.845 [2024-07-13 19:55:00.467025] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.845 [2024-07-13 19:55:00.467052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.845 #51 NEW cov: 12093 ft: 14635 corp: 24/2634b lim: 320 exec/s: 51 rss: 71Mb L: 83/223 MS: 1 ChangeBinInt- 00:07:13.118 [2024-07-13 19:55:00.517148] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.118 [2024-07-13 19:55:00.517176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.118 #52 NEW cov: 12093 ft: 14695 corp: 25/2717b lim: 320 exec/s: 52 rss: 71Mb L: 83/223 MS: 1 CrossOver- 00:07:13.118 [2024-07-13 19:55:00.556972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:88888800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.118 [2024-07-13 19:55:00.556999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.118 #53 NEW cov: 12093 ft: 14712 corp: 26/2836b lim: 320 exec/s: 53 rss: 71Mb L: 119/223 MS: 1 InsertByte- 00:07:13.118 [2024-07-13 19:55:00.607348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x800000 00:07:13.118 [2024-07-13 19:55:00.607374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.118 #54 NEW cov: 12093 ft: 14729 corp: 27/2930b lim: 320 exec/s: 54 rss: 71Mb L: 94/223 MS: 1 ChangeByte- 00:07:13.118 [2024-07-13 19:55:00.647489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:88888800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.118 [2024-07-13 19:55:00.647516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.118 #55 NEW cov: 12093 ft: 14737 corp: 28/3049b lim: 320 exec/s: 55 rss: 71Mb L: 119/223 MS: 1 ShuffleBytes- 00:07:13.118 [2024-07-13 19:55:00.697563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:88888800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.118 [2024-07-13 19:55:00.697588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.118 #56 NEW cov: 12093 ft: 14739 corp: 29/3169b lim: 320 exec/s: 56 rss: 71Mb L: 120/223 MS: 1 InsertByte- 00:07:13.119 [2024-07-13 19:55:00.738004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.119 [2024-07-13 19:55:00.738030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.119 [2024-07-13 19:55:00.738145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:13.119 [2024-07-13 19:55:00.738162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.119 #57 NEW cov: 12093 ft: 14748 corp: 
30/3347b lim: 320 exec/s: 57 rss: 71Mb L: 178/223 MS: 1 CrossOver- 00:07:13.119 [2024-07-13 19:55:00.777975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.119 [2024-07-13 19:55:00.778002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.378 #58 NEW cov: 12093 ft: 14760 corp: 31/3442b lim: 320 exec/s: 58 rss: 71Mb L: 95/223 MS: 1 InsertByte- 00:07:13.378 [2024-07-13 19:55:00.818083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.378 [2024-07-13 19:55:00.818107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.378 #59 NEW cov: 12093 ft: 14829 corp: 32/3538b lim: 320 exec/s: 59 rss: 71Mb L: 96/223 MS: 1 InsertByte- 00:07:13.378 [2024-07-13 19:55:00.868171] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.378 [2024-07-13 19:55:00.868198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.378 #60 NEW cov: 12093 ft: 14866 corp: 33/3621b lim: 320 exec/s: 60 rss: 71Mb L: 83/223 MS: 1 ShuffleBytes- 00:07:13.378 [2024-07-13 19:55:00.918281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x50000 00:07:13.378 [2024-07-13 19:55:00.918309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.378 #61 NEW cov: 12093 ft: 14875 corp: 34/3716b lim: 320 exec/s: 61 rss: 71Mb L: 95/223 MS: 1 CopyPart- 00:07:13.378 [2024-07-13 19:55:00.958778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.378 [2024-07-13 19:55:00.958805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.378 [2024-07-13 19:55:00.958935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:13.378 [2024-07-13 19:55:00.958951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.378 [2024-07-13 19:55:00.959062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:13.378 [2024-07-13 19:55:00.959079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.378 #62 NEW cov: 12093 ft: 14887 corp: 35/3912b lim: 320 exec/s: 62 rss: 72Mb L: 196/223 MS: 1 InsertRepeatedBytes- 00:07:13.378 [2024-07-13 19:55:01.008930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:0 cdw10:0aa70000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.378 [2024-07-13 19:55:01.008955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.378 
[2024-07-13 19:55:01.009068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:13.378 [2024-07-13 19:55:01.009083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.378 [2024-07-13 19:55:01.009196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:13.378 [2024-07-13 19:55:01.009214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.638 #63 NEW cov: 12093 ft: 14893 corp: 36/4133b lim: 320 exec/s: 63 rss: 72Mb L: 221/223 MS: 1 ShuffleBytes- 00:07:13.638 [2024-07-13 19:55:01.058626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.638 [2024-07-13 19:55:01.058653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.638 #64 NEW cov: 12093 ft: 14914 corp: 37/4230b lim: 320 exec/s: 64 rss: 72Mb L: 97/223 MS: 1 CrossOver- 00:07:13.638 [2024-07-13 19:55:01.098767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (a7) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.638 [2024-07-13 19:55:01.098793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.638 #65 NEW cov: 12093 ft: 14926 corp: 38/4310b lim: 320 exec/s: 65 rss: 72Mb L: 80/223 MS: 1 EraseBytes- 00:07:13.638 [2024-07-13 19:55:01.139242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (75) qid:0 cid:4 nsid:75757575 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.638 [2024-07-13 19:55:01.139268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.638 [2024-07-13 19:55:01.139378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:ffffffff 00:07:13.638 [2024-07-13 19:55:01.139395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.638 #66 NEW cov: 12093 ft: 14934 corp: 39/4439b lim: 320 exec/s: 33 rss: 72Mb L: 129/223 MS: 1 InsertRepeatedBytes- 00:07:13.638 #66 DONE cov: 12093 ft: 14934 corp: 39/4439b lim: 320 exec/s: 33 rss: 72Mb 00:07:13.638 Done 66 runs in 2 second(s) 00:07:13.638 19:55:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:07:13.638 19:55:01 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:13.638 19:55:01 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:13.638 19:55:01 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:13.638 19:55:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:07:13.638 19:55:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:13.638 19:55:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:13.638 19:55:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:13.638 19:55:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local 
nvmf_cfg=/tmp/fuzz_json_1.conf 00:07:13.638 19:55:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:13.638 19:55:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:13.638 19:55:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:07:13.638 19:55:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4401 00:07:13.638 19:55:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:13.638 19:55:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:07:13.638 19:55:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:13.638 19:55:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:13.638 19:55:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:13.638 19:55:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:07:13.897 [2024-07-13 19:55:01.307382] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:13.897 [2024-07-13 19:55:01.307449] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3674034 ] 00:07:13.897 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.897 [2024-07-13 19:55:01.476699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.897 [2024-07-13 19:55:01.498140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.897 [2024-07-13 19:55:01.550296] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.157 [2024-07-13 19:55:01.566588] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:07:14.157 INFO: Running with entropic power schedule (0xFF, 100). 00:07:14.157 INFO: Seed: 288988498 00:07:14.157 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:14.157 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:14.157 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:14.157 INFO: A corpus is not provided, starting from an empty corpus 00:07:14.157 #2 INITED exec/s: 0 rss: 62Mb 00:07:14.157 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:14.157 This may also happen if the target rejected all inputs we tried so far 00:07:14.157 [2024-07-13 19:55:01.624788] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.157 [2024-07-13 19:55:01.624908] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.157 [2024-07-13 19:55:01.625018] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.157 [2024-07-13 19:55:01.625125] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.157 [2024-07-13 19:55:01.625342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.157 [2024-07-13 19:55:01.625375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.157 [2024-07-13 19:55:01.625429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.157 [2024-07-13 19:55:01.625448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.157 [2024-07-13 19:55:01.625511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.157 [2024-07-13 19:55:01.625525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.157 [2024-07-13 19:55:01.625578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.157 [2024-07-13 19:55:01.625592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.416 NEW_FUNC[1/692]: 0x4942b0 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:07:14.416 NEW_FUNC[2/692]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:14.416 #6 NEW cov: 11881 ft: 11881 corp: 2/30b lim: 30 exec/s: 0 rss: 69Mb L: 29/29 MS: 4 ChangeBit-CopyPart-ChangeBit-InsertRepeatedBytes- 00:07:14.416 [2024-07-13 19:55:01.956295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.416 [2024-07-13 19:55:01.956359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.416 [2024-07-13 19:55:01.956450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.416 [2024-07-13 19:55:01.956477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.416 [2024-07-13 19:55:01.956554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.416 [2024-07-13 
19:55:01.956580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.416 [2024-07-13 19:55:01.956656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.416 [2024-07-13 19:55:01.956682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.416 #11 NEW cov: 12028 ft: 12565 corp: 3/55b lim: 30 exec/s: 0 rss: 69Mb L: 25/29 MS: 5 ChangeByte-ShuffleBytes-InsertByte-EraseBytes-InsertRepeatedBytes- 00:07:14.416 [2024-07-13 19:55:01.995588] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.416 [2024-07-13 19:55:01.995708] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.416 [2024-07-13 19:55:01.995816] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.416 [2024-07-13 19:55:01.995919] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.416 [2024-07-13 19:55:01.996124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.416 [2024-07-13 19:55:01.996151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.416 [2024-07-13 19:55:01.996207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.416 [2024-07-13 19:55:01.996222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.416 [2024-07-13 19:55:01.996273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.416 [2024-07-13 19:55:01.996287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.416 [2024-07-13 19:55:01.996341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.416 [2024-07-13 19:55:01.996355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.416 #12 NEW cov: 12034 ft: 12851 corp: 4/84b lim: 30 exec/s: 0 rss: 69Mb L: 29/29 MS: 1 ChangeByte- 00:07:14.416 [2024-07-13 19:55:02.045743] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90984) > buf size (4096) 00:07:14.416 [2024-07-13 19:55:02.045857] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:14.416 [2024-07-13 19:55:02.045960] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:14.416 [2024-07-13 19:55:02.046067] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:14.416 [2024-07-13 19:55:02.046279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58d90074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.416 
[2024-07-13 19:55:02.046308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.416 [2024-07-13 19:55:02.046364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.416 [2024-07-13 19:55:02.046379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.416 [2024-07-13 19:55:02.046430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.416 [2024-07-13 19:55:02.046448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.416 [2024-07-13 19:55:02.046502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.416 [2024-07-13 19:55:02.046516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.416 #17 NEW cov: 12119 ft: 13224 corp: 5/113b lim: 30 exec/s: 0 rss: 69Mb L: 29/29 MS: 5 InsertByte-ChangeBinInt-CrossOver-ChangeByte-InsertRepeatedBytes- 00:07:14.674 [2024-07-13 19:55:02.085847] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90984) > buf size (4096) 00:07:14.674 [2024-07-13 19:55:02.085961] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:14.674 [2024-07-13 19:55:02.086065] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:14.674 [2024-07-13 19:55:02.086164] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:14.674 [2024-07-13 19:55:02.086370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58d90074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.674 [2024-07-13 19:55:02.086397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.674 [2024-07-13 19:55:02.086456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.674 [2024-07-13 19:55:02.086471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.675 [2024-07-13 19:55:02.086523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.675 [2024-07-13 19:55:02.086537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.675 [2024-07-13 19:55:02.086588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.675 [2024-07-13 19:55:02.086603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.675 #18 NEW cov: 12119 ft: 13285 corp: 6/142b lim: 30 exec/s: 0 rss: 69Mb L: 29/29 MS: 1 CopyPart- 
00:07:14.675 [2024-07-13 19:55:02.136254] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x7e 00:07:14.675 [2024-07-13 19:55:02.136457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.675 [2024-07-13 19:55:02.136481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.675 [2024-07-13 19:55:02.136538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.675 [2024-07-13 19:55:02.136553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.675 [2024-07-13 19:55:02.136605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.675 [2024-07-13 19:55:02.136623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.675 [2024-07-13 19:55:02.136676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.675 [2024-07-13 19:55:02.136689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.675 #19 NEW cov: 12125 ft: 13368 corp: 7/168b lim: 30 exec/s: 0 rss: 69Mb L: 26/29 MS: 1 InsertByte- 00:07:14.675 [2024-07-13 19:55:02.186243] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (114692) > buf size (4096) 00:07:14.675 [2024-07-13 19:55:02.186453] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x7e 00:07:14.675 [2024-07-13 19:55:02.186670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.675 [2024-07-13 19:55:02.186696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.675 [2024-07-13 19:55:02.186751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:70000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.675 [2024-07-13 19:55:02.186766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.675 [2024-07-13 19:55:02.186820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.675 [2024-07-13 19:55:02.186833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.675 [2024-07-13 19:55:02.186887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.675 [2024-07-13 19:55:02.186902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.675 #20 NEW cov: 12125 ft: 13507 corp: 8/194b lim: 30 exec/s: 0 rss: 70Mb L: 26/29 MS: 1 
ChangeByte- 00:07:14.675 [2024-07-13 19:55:02.236276] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (353128) > buf size (4096) 00:07:14.675 [2024-07-13 19:55:02.236393] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:14.675 [2024-07-13 19:55:02.236504] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:14.675 [2024-07-13 19:55:02.236625] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:14.675 [2024-07-13 19:55:02.236832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58d98100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.675 [2024-07-13 19:55:02.236859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.675 [2024-07-13 19:55:02.236913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.675 [2024-07-13 19:55:02.236928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.675 [2024-07-13 19:55:02.236984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.675 [2024-07-13 19:55:02.236998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.675 [2024-07-13 19:55:02.237052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.675 [2024-07-13 19:55:02.237070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.675 #21 NEW cov: 12125 ft: 13527 corp: 9/223b lim: 30 exec/s: 0 rss: 70Mb L: 29/29 MS: 1 ChangeBinInt- 00:07:14.675 [2024-07-13 19:55:02.286370] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:14.675 [2024-07-13 19:55:02.286773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.675 [2024-07-13 19:55:02.286800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.675 [2024-07-13 19:55:02.286855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.675 [2024-07-13 19:55:02.286869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.675 [2024-07-13 19:55:02.286925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.675 [2024-07-13 19:55:02.286939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.675 #22 NEW cov: 12125 ft: 14085 corp: 10/241b lim: 30 exec/s: 0 rss: 70Mb L: 18/29 MS: 1 CrossOver- 00:07:14.675 [2024-07-13 19:55:02.326556] 
ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.675 [2024-07-13 19:55:02.326672] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.675 [2024-07-13 19:55:02.326780] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.675 [2024-07-13 19:55:02.326883] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.675 [2024-07-13 19:55:02.326986] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa0b 00:07:14.675 [2024-07-13 19:55:02.327187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.675 [2024-07-13 19:55:02.327212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.675 [2024-07-13 19:55:02.327269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.675 [2024-07-13 19:55:02.327284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.675 [2024-07-13 19:55:02.327339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.675 [2024-07-13 19:55:02.327353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.675 [2024-07-13 19:55:02.327409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.675 [2024-07-13 19:55:02.327423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.675 [2024-07-13 19:55:02.327477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.675 [2024-07-13 19:55:02.327491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:14.933 #23 NEW cov: 12125 ft: 14219 corp: 11/271b lim: 30 exec/s: 0 rss: 70Mb L: 30/30 MS: 1 CopyPart- 00:07:14.933 [2024-07-13 19:55:02.366640] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.933 [2024-07-13 19:55:02.366757] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (876900) > buf size (4096) 00:07:14.933 [2024-07-13 19:55:02.366863] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.933 [2024-07-13 19:55:02.366966] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.933 [2024-07-13 19:55:02.367173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.933 [2024-07-13 19:55:02.367200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.933 [2024-07-13 19:55:02.367255] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:58588358 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.933 [2024-07-13 19:55:02.367270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.933 [2024-07-13 19:55:02.367326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.933 [2024-07-13 19:55:02.367341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.933 [2024-07-13 19:55:02.367394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.933 [2024-07-13 19:55:02.367408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.933 #24 NEW cov: 12125 ft: 14237 corp: 12/300b lim: 30 exec/s: 0 rss: 70Mb L: 29/30 MS: 1 ChangeBinInt- 00:07:14.933 [2024-07-13 19:55:02.406766] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.933 [2024-07-13 19:55:02.406880] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.933 [2024-07-13 19:55:02.406984] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.933 [2024-07-13 19:55:02.407089] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.933 [2024-07-13 19:55:02.407305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.933 [2024-07-13 19:55:02.407331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.933 [2024-07-13 19:55:02.407386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.933 [2024-07-13 19:55:02.407401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.933 [2024-07-13 19:55:02.407455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.933 [2024-07-13 19:55:02.407469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.933 [2024-07-13 19:55:02.407522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.933 [2024-07-13 19:55:02.407537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.933 #25 NEW cov: 12125 ft: 14263 corp: 13/329b lim: 30 exec/s: 0 rss: 70Mb L: 29/30 MS: 1 ShuffleBytes- 00:07:14.933 [2024-07-13 19:55:02.456952] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.933 [2024-07-13 19:55:02.457065] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf 
size (4096) 00:07:14.933 [2024-07-13 19:55:02.457177] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.933 [2024-07-13 19:55:02.457282] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.934 [2024-07-13 19:55:02.457386] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa0b 00:07:14.934 [2024-07-13 19:55:02.457601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.934 [2024-07-13 19:55:02.457628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.934 [2024-07-13 19:55:02.457685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.934 [2024-07-13 19:55:02.457698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.934 [2024-07-13 19:55:02.457751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.934 [2024-07-13 19:55:02.457764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.934 [2024-07-13 19:55:02.457816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.934 [2024-07-13 19:55:02.457829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.934 [2024-07-13 19:55:02.457882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.934 [2024-07-13 19:55:02.457897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:14.934 #26 NEW cov: 12125 ft: 14294 corp: 14/359b lim: 30 exec/s: 0 rss: 70Mb L: 30/30 MS: 1 ShuffleBytes- 00:07:14.934 [2024-07-13 19:55:02.507042] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.934 [2024-07-13 19:55:02.507159] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.934 [2024-07-13 19:55:02.507261] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa0b 00:07:14.934 [2024-07-13 19:55:02.507485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.934 [2024-07-13 19:55:02.507511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.934 [2024-07-13 19:55:02.507565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.934 [2024-07-13 19:55:02.507579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.934 [2024-07-13 19:55:02.507635] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.934 [2024-07-13 19:55:02.507648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.934 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:14.934 #27 NEW cov: 12148 ft: 14380 corp: 15/377b lim: 30 exec/s: 0 rss: 70Mb L: 18/30 MS: 1 EraseBytes- 00:07:14.934 [2024-07-13 19:55:02.547127] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.934 [2024-07-13 19:55:02.547243] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.934 [2024-07-13 19:55:02.547349] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.934 [2024-07-13 19:55:02.547554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.934 [2024-07-13 19:55:02.547580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.934 [2024-07-13 19:55:02.547635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.934 [2024-07-13 19:55:02.547650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.934 [2024-07-13 19:55:02.547704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:58580000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.934 [2024-07-13 19:55:02.547718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.934 #28 NEW cov: 12148 ft: 14412 corp: 16/398b lim: 30 exec/s: 0 rss: 70Mb L: 21/30 MS: 1 CrossOver- 00:07:14.934 [2024-07-13 19:55:02.587234] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (614756) > buf size (4096) 00:07:14.934 [2024-07-13 19:55:02.587349] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.934 [2024-07-13 19:55:02.587465] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:14.934 [2024-07-13 19:55:02.587676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58580258 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.934 [2024-07-13 19:55:02.587702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.934 [2024-07-13 19:55:02.587758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.934 [2024-07-13 19:55:02.587773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.934 [2024-07-13 19:55:02.587824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:58580000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.934 [2024-07-13 
19:55:02.587838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.191 #29 NEW cov: 12148 ft: 14440 corp: 17/419b lim: 30 exec/s: 29 rss: 70Mb L: 21/30 MS: 1 ChangeByte- 00:07:15.191 [2024-07-13 19:55:02.637453] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000a7a7 00:07:15.191 [2024-07-13 19:55:02.637567] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.192 [2024-07-13 19:55:02.637681] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.192 [2024-07-13 19:55:02.637788] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.192 [2024-07-13 19:55:02.638002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5858839e cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.192 [2024-07-13 19:55:02.638028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.192 [2024-07-13 19:55:02.638083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.192 [2024-07-13 19:55:02.638098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.192 [2024-07-13 19:55:02.638154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.192 [2024-07-13 19:55:02.638167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.192 [2024-07-13 19:55:02.638224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.192 [2024-07-13 19:55:02.638238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.192 #30 NEW cov: 12148 ft: 14490 corp: 18/448b lim: 30 exec/s: 30 rss: 70Mb L: 29/30 MS: 1 ChangeBinInt- 00:07:15.192 [2024-07-13 19:55:02.677539] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (24932) > buf size (4096) 00:07:15.192 [2024-07-13 19:55:02.677654] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.192 [2024-07-13 19:55:02.677762] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.192 [2024-07-13 19:55:02.677870] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.192 [2024-07-13 19:55:02.677978] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa0b 00:07:15.192 [2024-07-13 19:55:02.678197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:18580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.192 [2024-07-13 19:55:02.678223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.192 [2024-07-13 19:55:02.678276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:58580058 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.192 [2024-07-13 19:55:02.678291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.192 [2024-07-13 19:55:02.678344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.192 [2024-07-13 19:55:02.678358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.192 [2024-07-13 19:55:02.678411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.192 [2024-07-13 19:55:02.678426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.192 [2024-07-13 19:55:02.678480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.192 [2024-07-13 19:55:02.678494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.192 #31 NEW cov: 12148 ft: 14526 corp: 19/478b lim: 30 exec/s: 31 rss: 70Mb L: 30/30 MS: 1 ChangeBit- 00:07:15.192 [2024-07-13 19:55:02.727680] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.192 [2024-07-13 19:55:02.727793] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.192 [2024-07-13 19:55:02.727902] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90436) > buf size (4096) 00:07:15.192 [2024-07-13 19:55:02.728003] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.192 [2024-07-13 19:55:02.728232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.192 [2024-07-13 19:55:02.728258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.192 [2024-07-13 19:55:02.728312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.192 [2024-07-13 19:55:02.728327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.192 [2024-07-13 19:55:02.728386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:58500058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.192 [2024-07-13 19:55:02.728401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.192 [2024-07-13 19:55:02.728455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.192 [2024-07-13 19:55:02.728469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.192 #32 NEW cov: 12148 ft: 14542 corp: 20/507b lim: 30 exec/s: 32 rss: 70Mb L: 29/30 MS: 
1 ChangeBit- 00:07:15.192 [2024-07-13 19:55:02.777816] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:15.192 [2024-07-13 19:55:02.777933] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:15.192 [2024-07-13 19:55:02.778037] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:07:15.192 [2024-07-13 19:55:02.778365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.192 [2024-07-13 19:55:02.778392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.192 [2024-07-13 19:55:02.778452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.192 [2024-07-13 19:55:02.778467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.192 [2024-07-13 19:55:02.778521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.192 [2024-07-13 19:55:02.778535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.192 [2024-07-13 19:55:02.778587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.192 [2024-07-13 19:55:02.778601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.192 #33 NEW cov: 12148 ft: 14569 corp: 21/535b lim: 30 exec/s: 33 rss: 70Mb L: 28/30 MS: 1 InsertRepeatedBytes- 00:07:15.192 [2024-07-13 19:55:02.827906] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:15.192 [2024-07-13 19:55:02.828303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.192 [2024-07-13 19:55:02.828330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.192 [2024-07-13 19:55:02.828383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.192 [2024-07-13 19:55:02.828397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.192 [2024-07-13 19:55:02.828451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.192 [2024-07-13 19:55:02.828465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.192 #34 NEW cov: 12148 ft: 14603 corp: 22/554b lim: 30 exec/s: 34 rss: 70Mb L: 19/30 MS: 1 InsertByte- 00:07:15.450 [2024-07-13 19:55:02.868093] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (18788) > buf size (4096) 00:07:15.450 [2024-07-13 19:55:02.868209] 
ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.450 [2024-07-13 19:55:02.868319] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.450 [2024-07-13 19:55:02.868422] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.450 [2024-07-13 19:55:02.868534] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa0b 00:07:15.450 [2024-07-13 19:55:02.868742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:12580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.450 [2024-07-13 19:55:02.868768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.450 [2024-07-13 19:55:02.868821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.450 [2024-07-13 19:55:02.868836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.450 [2024-07-13 19:55:02.868885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.450 [2024-07-13 19:55:02.868899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.450 [2024-07-13 19:55:02.868953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.450 [2024-07-13 19:55:02.868968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.450 [2024-07-13 19:55:02.869018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.450 [2024-07-13 19:55:02.869032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.450 #35 NEW cov: 12148 ft: 14625 corp: 23/584b lim: 30 exec/s: 35 rss: 70Mb L: 30/30 MS: 1 ChangeByte- 00:07:15.450 [2024-07-13 19:55:02.908151] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (353128) > buf size (4096) 00:07:15.450 [2024-07-13 19:55:02.908265] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:15.450 [2024-07-13 19:55:02.908369] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:15.450 [2024-07-13 19:55:02.908589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58d98100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.450 [2024-07-13 19:55:02.908616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.450 [2024-07-13 19:55:02.908668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.450 [2024-07-13 19:55:02.908682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.450 [2024-07-13 19:55:02.908736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.450 [2024-07-13 19:55:02.908750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.450 #36 NEW cov: 12148 ft: 14627 corp: 24/603b lim: 30 exec/s: 36 rss: 70Mb L: 19/30 MS: 1 EraseBytes- 00:07:15.450 [2024-07-13 19:55:02.958307] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (353128) > buf size (4096) 00:07:15.450 [2024-07-13 19:55:02.958425] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:15.450 [2024-07-13 19:55:02.958541] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:15.450 [2024-07-13 19:55:02.958650] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:15.450 [2024-07-13 19:55:02.958858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58d98100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.450 [2024-07-13 19:55:02.958885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.450 [2024-07-13 19:55:02.958939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.450 [2024-07-13 19:55:02.958952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.450 [2024-07-13 19:55:02.959004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.450 [2024-07-13 19:55:02.959017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.450 [2024-07-13 19:55:02.959069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.450 [2024-07-13 19:55:02.959083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.450 #37 NEW cov: 12148 ft: 14647 corp: 25/632b lim: 30 exec/s: 37 rss: 70Mb L: 29/30 MS: 1 ChangeBit- 00:07:15.450 [2024-07-13 19:55:02.998434] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.450 [2024-07-13 19:55:02.998553] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.450 [2024-07-13 19:55:02.998660] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (59748) > buf size (4096) 00:07:15.450 [2024-07-13 19:55:02.998765] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.450 [2024-07-13 19:55:02.998882] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa0b 00:07:15.450 [2024-07-13 19:55:02.999101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.450 [2024-07-13 19:55:02.999127] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.450 [2024-07-13 19:55:02.999177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.450 [2024-07-13 19:55:02.999192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.450 [2024-07-13 19:55:02.999242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:3a580050 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.450 [2024-07-13 19:55:02.999256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.450 [2024-07-13 19:55:02.999305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.450 [2024-07-13 19:55:02.999319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.450 [2024-07-13 19:55:02.999373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.450 [2024-07-13 19:55:02.999386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.450 #38 NEW cov: 12148 ft: 14652 corp: 26/662b lim: 30 exec/s: 38 rss: 70Mb L: 30/30 MS: 1 InsertByte- 00:07:15.450 [2024-07-13 19:55:03.048576] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90984) > buf size (4096) 00:07:15.450 [2024-07-13 19:55:03.048696] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:15.450 [2024-07-13 19:55:03.048804] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:15.450 [2024-07-13 19:55:03.048922] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:15.450 [2024-07-13 19:55:03.049143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58d90074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.450 [2024-07-13 19:55:03.049170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.450 [2024-07-13 19:55:03.049226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.450 [2024-07-13 19:55:03.049240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.450 [2024-07-13 19:55:03.049293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.450 [2024-07-13 19:55:03.049307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.450 [2024-07-13 19:55:03.049360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:15.450 [2024-07-13 19:55:03.049374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.450 #39 NEW cov: 12148 ft: 14664 corp: 27/691b lim: 30 exec/s: 39 rss: 70Mb L: 29/30 MS: 1 ChangeBit- 00:07:15.450 [2024-07-13 19:55:03.088684] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (353128) > buf size (4096) 00:07:15.450 [2024-07-13 19:55:03.088799] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:15.450 [2024-07-13 19:55:03.088903] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:15.450 [2024-07-13 19:55:03.089009] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:15.450 [2024-07-13 19:55:03.089214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58d98100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.450 [2024-07-13 19:55:03.089241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.450 [2024-07-13 19:55:03.089296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.450 [2024-07-13 19:55:03.089311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.450 [2024-07-13 19:55:03.089362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.450 [2024-07-13 19:55:03.089375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.450 [2024-07-13 19:55:03.089428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.450 [2024-07-13 19:55:03.089446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.708 #40 NEW cov: 12148 ft: 14746 corp: 28/720b lim: 30 exec/s: 40 rss: 70Mb L: 29/30 MS: 1 ChangeByte- 00:07:15.708 [2024-07-13 19:55:03.138808] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:15.708 [2024-07-13 19:55:03.138924] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:15.708 [2024-07-13 19:55:03.139031] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:15.708 [2024-07-13 19:55:03.139232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.708 [2024-07-13 19:55:03.139259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.708 [2024-07-13 19:55:03.139316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.708 [2024-07-13 19:55:03.139330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:07:15.708 [2024-07-13 19:55:03.139384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.708 [2024-07-13 19:55:03.139398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.708 #41 NEW cov: 12148 ft: 14766 corp: 29/741b lim: 30 exec/s: 41 rss: 70Mb L: 21/30 MS: 1 EraseBytes- 00:07:15.708 [2024-07-13 19:55:03.188982] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.708 [2024-07-13 19:55:03.189100] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (876900) > buf size (4096) 00:07:15.708 [2024-07-13 19:55:03.189211] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.708 [2024-07-13 19:55:03.189316] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.708 [2024-07-13 19:55:03.189568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.708 [2024-07-13 19:55:03.189595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.708 [2024-07-13 19:55:03.189667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:58588358 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.708 [2024-07-13 19:55:03.189682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.708 [2024-07-13 19:55:03.189737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.708 [2024-07-13 19:55:03.189750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.708 [2024-07-13 19:55:03.189805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.708 [2024-07-13 19:55:03.189820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.708 #42 NEW cov: 12148 ft: 14775 corp: 30/770b lim: 30 exec/s: 42 rss: 70Mb L: 29/30 MS: 1 ShuffleBytes- 00:07:15.708 [2024-07-13 19:55:03.239091] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.708 [2024-07-13 19:55:03.239220] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (876900) > buf size (4096) 00:07:15.708 [2024-07-13 19:55:03.239331] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.708 [2024-07-13 19:55:03.239437] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.708 [2024-07-13 19:55:03.239661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.708 [2024-07-13 19:55:03.239687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.708 
[2024-07-13 19:55:03.239742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:58588358 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.708 [2024-07-13 19:55:03.239760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.708 [2024-07-13 19:55:03.239813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.708 [2024-07-13 19:55:03.239826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.708 [2024-07-13 19:55:03.239880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.708 [2024-07-13 19:55:03.239894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.708 #43 NEW cov: 12148 ft: 14791 corp: 31/799b lim: 30 exec/s: 43 rss: 70Mb L: 29/30 MS: 1 ChangeBit- 00:07:15.708 [2024-07-13 19:55:03.279219] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.708 [2024-07-13 19:55:03.279334] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.708 [2024-07-13 19:55:03.279449] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.708 [2024-07-13 19:55:03.279558] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.708 [2024-07-13 19:55:03.279663] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x740b 00:07:15.708 [2024-07-13 19:55:03.279871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.708 [2024-07-13 19:55:03.279898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.708 [2024-07-13 19:55:03.279952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.708 [2024-07-13 19:55:03.279966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.708 [2024-07-13 19:55:03.280017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.709 [2024-07-13 19:55:03.280031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.709 [2024-07-13 19:55:03.280082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.709 [2024-07-13 19:55:03.280096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.709 [2024-07-13 19:55:03.280147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:15.709 [2024-07-13 19:55:03.280161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.709 #44 NEW cov: 12148 ft: 14822 corp: 32/829b lim: 30 exec/s: 44 rss: 70Mb L: 30/30 MS: 1 ChangeByte- 00:07:15.709 [2024-07-13 19:55:03.319344] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (353128) > buf size (4096) 00:07:15.709 [2024-07-13 19:55:03.319511] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:15.709 [2024-07-13 19:55:03.319618] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:15.709 [2024-07-13 19:55:03.319740] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:15.709 [2024-07-13 19:55:03.319963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58d98100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.709 [2024-07-13 19:55:03.319993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.709 [2024-07-13 19:55:03.320049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.709 [2024-07-13 19:55:03.320064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.709 [2024-07-13 19:55:03.320116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.709 [2024-07-13 19:55:03.320130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.709 [2024-07-13 19:55:03.320184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.709 [2024-07-13 19:55:03.320198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.709 #45 NEW cov: 12148 ft: 14868 corp: 33/858b lim: 30 exec/s: 45 rss: 71Mb L: 29/30 MS: 1 ChangeByte- 00:07:15.709 [2024-07-13 19:55:03.359372] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (614756) > buf size (4096) 00:07:15.709 [2024-07-13 19:55:03.359496] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (137572) > buf size (4096) 00:07:15.709 [2024-07-13 19:55:03.359606] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.709 [2024-07-13 19:55:03.359819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58580258 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.709 [2024-07-13 19:55:03.359844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.709 [2024-07-13 19:55:03.359896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:86580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.709 [2024-07-13 19:55:03.359910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:07:15.709 [2024-07-13 19:55:03.359960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.709 [2024-07-13 19:55:03.359973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.967 #46 NEW cov: 12148 ft: 14901 corp: 34/880b lim: 30 exec/s: 46 rss: 71Mb L: 22/30 MS: 1 InsertByte- 00:07:15.967 [2024-07-13 19:55:03.409567] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (24932) > buf size (4096) 00:07:15.967 [2024-07-13 19:55:03.409683] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.967 [2024-07-13 19:55:03.409790] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.967 [2024-07-13 19:55:03.409998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:18580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.967 [2024-07-13 19:55:03.410025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.967 [2024-07-13 19:55:03.410080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.967 [2024-07-13 19:55:03.410095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.967 [2024-07-13 19:55:03.410149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.967 [2024-07-13 19:55:03.410166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.967 #47 NEW cov: 12148 ft: 14917 corp: 35/903b lim: 30 exec/s: 47 rss: 71Mb L: 23/30 MS: 1 EraseBytes- 00:07:15.967 [2024-07-13 19:55:03.459706] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.967 [2024-07-13 19:55:03.459818] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (876900) > buf size (4096) 00:07:15.967 [2024-07-13 19:55:03.459925] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.967 [2024-07-13 19:55:03.460033] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.967 [2024-07-13 19:55:03.460249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.967 [2024-07-13 19:55:03.460275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.967 [2024-07-13 19:55:03.460329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:58588358 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.967 [2024-07-13 19:55:03.460344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.967 [2024-07-13 19:55:03.460397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:58580058 cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.967 [2024-07-13 19:55:03.460413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.967 [2024-07-13 19:55:03.460468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.967 [2024-07-13 19:55:03.460483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.967 #48 NEW cov: 12148 ft: 14921 corp: 36/932b lim: 30 exec/s: 48 rss: 71Mb L: 29/30 MS: 1 ShuffleBytes- 00:07:15.967 [2024-07-13 19:55:03.499834] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (353128) > buf size (4096) 00:07:15.967 [2024-07-13 19:55:03.499948] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:15.967 [2024-07-13 19:55:03.500052] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:15.967 [2024-07-13 19:55:03.500153] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:15.967 [2024-07-13 19:55:03.500258] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (119252) > buf size (4096) 00:07:15.967 [2024-07-13 19:55:03.500470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58d98100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.967 [2024-07-13 19:55:03.500497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.967 [2024-07-13 19:55:03.500550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.967 [2024-07-13 19:55:03.500564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.967 [2024-07-13 19:55:03.500618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.967 [2024-07-13 19:55:03.500632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.967 [2024-07-13 19:55:03.500684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.967 [2024-07-13 19:55:03.500697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.967 [2024-07-13 19:55:03.500752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:74740074 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.967 [2024-07-13 19:55:03.500766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:15.968 #49 NEW cov: 12148 ft: 14925 corp: 37/962b lim: 30 exec/s: 49 rss: 71Mb L: 30/30 MS: 1 InsertByte- 00:07:15.968 [2024-07-13 19:55:03.539865] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x582a 00:07:15.968 [2024-07-13 19:55:03.539975] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) 
> buf size (4096) 00:07:15.968 [2024-07-13 19:55:03.540079] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.968 [2024-07-13 19:55:03.540274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.968 [2024-07-13 19:55:03.540300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.968 [2024-07-13 19:55:03.540355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.968 [2024-07-13 19:55:03.540369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.968 [2024-07-13 19:55:03.540422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.968 [2024-07-13 19:55:03.540436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.968 #50 NEW cov: 12148 ft: 14938 corp: 38/984b lim: 30 exec/s: 50 rss: 71Mb L: 22/30 MS: 1 InsertByte- 00:07:15.968 [2024-07-13 19:55:03.580005] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.968 [2024-07-13 19:55:03.580115] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (876900) > buf size (4096) 00:07:15.968 [2024-07-13 19:55:03.580218] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.968 [2024-07-13 19:55:03.580318] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.968 [2024-07-13 19:55:03.580525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.968 [2024-07-13 19:55:03.580552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.968 [2024-07-13 19:55:03.580607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:58588358 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.968 [2024-07-13 19:55:03.580621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.968 [2024-07-13 19:55:03.580674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.968 [2024-07-13 19:55:03.580688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.968 [2024-07-13 19:55:03.580742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.968 [2024-07-13 19:55:03.580758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:15.968 #51 NEW cov: 12148 ft: 14943 corp: 39/1013b lim: 30 exec/s: 51 rss: 71Mb L: 29/30 MS: 1 ChangeBinInt- 00:07:15.968 [2024-07-13 19:55:03.620114] 
ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.968 [2024-07-13 19:55:03.620233] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (876900) > buf size (4096) 00:07:15.968 [2024-07-13 19:55:03.620337] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.968 [2024-07-13 19:55:03.620437] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90468) > buf size (4096) 00:07:15.968 [2024-07-13 19:55:03.620647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.968 [2024-07-13 19:55:03.620674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.968 [2024-07-13 19:55:03.620729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:58588358 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.968 [2024-07-13 19:55:03.620744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.968 [2024-07-13 19:55:03.620798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.968 [2024-07-13 19:55:03.620812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.968 [2024-07-13 19:55:03.620866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:58580058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.968 [2024-07-13 19:55:03.620880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.226 #52 NEW cov: 12148 ft: 14953 corp: 40/1042b lim: 30 exec/s: 26 rss: 71Mb L: 29/30 MS: 1 ShuffleBytes- 00:07:16.227 #52 DONE cov: 12148 ft: 14953 corp: 40/1042b lim: 30 exec/s: 26 rss: 71Mb 00:07:16.227 Done 52 runs in 2 second(s) 00:07:16.227 19:55:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:07:16.227 19:55:03 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:16.227 19:55:03 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:16.227 19:55:03 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:16.227 19:55:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:07:16.227 19:55:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:16.227 19:55:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:16.227 19:55:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:16.227 19:55:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:07:16.227 19:55:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:16.227 19:55:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:16.227 19:55:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:07:16.227 19:55:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4402 00:07:16.227 19:55:03 
llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:16.227 19:55:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:07:16.227 19:55:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:16.227 19:55:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:16.227 19:55:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:16.227 19:55:03 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:07:16.227 [2024-07-13 19:55:03.804683] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:16.227 [2024-07-13 19:55:03.804763] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3674518 ] 00:07:16.227 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.486 [2024-07-13 19:55:03.978923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.486 [2024-07-13 19:55:04.000222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.486 [2024-07-13 19:55:04.052476] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.486 [2024-07-13 19:55:04.068793] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:07:16.486 INFO: Running with entropic power schedule (0xFF, 100). 00:07:16.486 INFO: Seed: 2793986681 00:07:16.486 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:16.486 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:16.486 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:16.486 INFO: A corpus is not provided, starting from an empty corpus 00:07:16.486 #2 INITED exec/s: 0 rss: 62Mb 00:07:16.486 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
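The run.sh trace above captures the whole per-run setup for fuzzer 2: the listen port is derived from the fuzzer index (44 plus the two-digit index, hence 4402), the stock fuzz_json.conf has its trsvcid rewritten with sed into a per-run config under /tmp, two known-benign leaks are suppressed for LeakSanitizer, and llvm_nvme_fuzz is started against an empty corpus directory. The commands below are a minimal standalone sketch reconstructed from that trace, not the literal script: SPDK_ROOT is introduced here only as a stand-in for the Jenkins workspace path, and the redirections into /tmp/fuzz_json_2.conf and the suppressions file are inferred from the variable assignments rather than shown verbatim in the trace.

  # Sketch of the fuzzer-2 launch implied by the nvmf/run.sh trace (paths shortened).
  SPDK_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # stand-in for the workspace path
  i=2                                                             # fuzzer index for this run
  port=44$(printf %02d $i)                                        # -> 4402
  corpus=$SPDK_ROOT/../corpus/llvm_nvmf_$i
  mkdir -p $corpus
  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
  # per-run JSON config: same as fuzz_json.conf but listening on $port (redirection inferred)
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      $SPDK_ROOT/test/fuzz/llvm/nvmf/fuzz_json.conf > /tmp/fuzz_json_$i.conf
  # leak suppressions echoed at run.sh@41-42, assumed to be collected into the suppress file
  printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > /var/tmp/suppress_nvmf_fuzz
  LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 \
  $SPDK_ROOT/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 \
      -P $SPDK_ROOT/../output/llvm/ -F "$trid" -c /tmp/fuzz_json_$i.conf \
      -t 1 -D $corpus -Z $i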
00:07:16.486 This may also happen if the target rejected all inputs we tried so far 00:07:16.486 [2024-07-13 19:55:04.124257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b007a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.486 [2024-07-13 19:55:04.124290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.486 [2024-07-13 19:55:04.124351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.486 [2024-07-13 19:55:04.124368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.486 [2024-07-13 19:55:04.124427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.486 [2024-07-13 19:55:04.124447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.003 NEW_FUNC[1/691]: 0x496d60 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:07:17.003 NEW_FUNC[2/691]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:17.003 #6 NEW cov: 11807 ft: 11821 corp: 2/27b lim: 35 exec/s: 0 rss: 69Mb L: 26/26 MS: 4 ChangeByte-ChangeBit-ChangeBit-InsertRepeatedBytes- 00:07:17.003 [2024-07-13 19:55:04.455254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:b4b400d7 cdw11:b400b4b4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-07-13 19:55:04.455322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.003 [2024-07-13 19:55:04.455416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:b4b400b4 cdw11:b400b4b4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-07-13 19:55:04.455459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.003 #9 NEW cov: 11950 ft: 12809 corp: 3/41b lim: 35 exec/s: 0 rss: 70Mb L: 14/26 MS: 3 ChangeByte-ChangeBit-InsertRepeatedBytes- 00:07:17.003 [2024-07-13 19:55:04.504897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:8e8e000a cdw11:8e008e8e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-07-13 19:55:04.504922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.003 #10 NEW cov: 11956 ft: 13348 corp: 4/53b lim: 35 exec/s: 0 rss: 70Mb L: 12/26 MS: 1 InsertRepeatedBytes- 00:07:17.003 [2024-07-13 19:55:04.545452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b007a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-07-13 19:55:04.545477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.003 [2024-07-13 19:55:04.545535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-07-13 19:55:04.545549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.003 [2024-07-13 19:55:04.545603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-07-13 19:55:04.545617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.003 [2024-07-13 19:55:04.545671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-07-13 19:55:04.545684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.003 [2024-07-13 19:55:04.545741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-07-13 19:55:04.545755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:17.003 #11 NEW cov: 12041 ft: 14089 corp: 5/88b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 CopyPart- 00:07:17.003 [2024-07-13 19:55:04.595265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b007a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-07-13 19:55:04.595290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.003 [2024-07-13 19:55:04.595347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-07-13 19:55:04.595361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.003 #12 NEW cov: 12041 ft: 14152 corp: 6/107b lim: 35 exec/s: 0 rss: 70Mb L: 19/35 MS: 1 EraseBytes- 00:07:17.003 [2024-07-13 19:55:04.645749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b0072 cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-07-13 19:55:04.645774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.003 [2024-07-13 19:55:04.645830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-07-13 19:55:04.645845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.003 [2024-07-13 19:55:04.645896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-07-13 19:55:04.645909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.003 [2024-07-13 19:55:04.645961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:17.003 [2024-07-13 19:55:04.645981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.003 [2024-07-13 19:55:04.646040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-07-13 19:55:04.646053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:17.261 #13 NEW cov: 12041 ft: 14184 corp: 7/142b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 ChangeBit- 00:07:17.261 [2024-07-13 19:55:04.685646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b007a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.261 [2024-07-13 19:55:04.685670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.261 [2024-07-13 19:55:04.685724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.261 [2024-07-13 19:55:04.685738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.261 [2024-07-13 19:55:04.685791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:413b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.261 [2024-07-13 19:55:04.685805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.261 #14 NEW cov: 12041 ft: 14231 corp: 8/168b lim: 35 exec/s: 0 rss: 70Mb L: 26/35 MS: 1 ChangeByte- 00:07:17.261 [2024-07-13 19:55:04.725786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b007a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.261 [2024-07-13 19:55:04.725811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.261 [2024-07-13 19:55:04.725866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.261 [2024-07-13 19:55:04.725880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.261 [2024-07-13 19:55:04.725949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:413b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.261 [2024-07-13 19:55:04.725964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.261 #15 NEW cov: 12041 ft: 14325 corp: 9/194b lim: 35 exec/s: 0 rss: 70Mb L: 26/35 MS: 1 ShuffleBytes- 00:07:17.261 [2024-07-13 19:55:04.775777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b007a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.261 [2024-07-13 19:55:04.775803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.261 [2024-07-13 19:55:04.775861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 
cid:5 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.261 [2024-07-13 19:55:04.775874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.261 #16 NEW cov: 12041 ft: 14361 corp: 10/213b lim: 35 exec/s: 0 rss: 70Mb L: 19/35 MS: 1 ShuffleBytes- 00:07:17.261 [2024-07-13 19:55:04.825901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b007a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.261 [2024-07-13 19:55:04.825926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.261 [2024-07-13 19:55:04.825980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.261 [2024-07-13 19:55:04.825994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.261 #17 NEW cov: 12041 ft: 14409 corp: 11/232b lim: 35 exec/s: 0 rss: 70Mb L: 19/35 MS: 1 ChangeByte- 00:07:17.261 [2024-07-13 19:55:04.876172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b007a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.261 [2024-07-13 19:55:04.876196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.261 [2024-07-13 19:55:04.876250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.261 [2024-07-13 19:55:04.876264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.261 [2024-07-13 19:55:04.876316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.261 [2024-07-13 19:55:04.876331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.261 #18 NEW cov: 12041 ft: 14449 corp: 12/253b lim: 35 exec/s: 0 rss: 70Mb L: 21/35 MS: 1 CMP- DE: "\377\377"- 00:07:17.261 [2024-07-13 19:55:04.916048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a8e000a cdw11:8e008e8e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.261 [2024-07-13 19:55:04.916075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.519 #19 NEW cov: 12041 ft: 14582 corp: 13/266b lim: 35 exec/s: 0 rss: 70Mb L: 13/35 MS: 1 CrossOver- 00:07:17.519 [2024-07-13 19:55:04.956409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b007a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.519 [2024-07-13 19:55:04.956436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.519 [2024-07-13 19:55:04.956498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b003b cdw11:3b001a3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.519 [2024-07-13 19:55:04.956513] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.519 [2024-07-13 19:55:04.956566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:413b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.519 [2024-07-13 19:55:04.956580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.519 #20 NEW cov: 12041 ft: 14640 corp: 14/292b lim: 35 exec/s: 0 rss: 70Mb L: 26/35 MS: 1 ChangeBinInt- 00:07:17.519 [2024-07-13 19:55:04.996747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b007a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.519 [2024-07-13 19:55:04.996773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.519 [2024-07-13 19:55:04.996833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.519 [2024-07-13 19:55:04.996847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.519 [2024-07-13 19:55:04.996902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.519 [2024-07-13 19:55:04.996916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.519 [2024-07-13 19:55:04.996971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.519 [2024-07-13 19:55:04.996989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.519 [2024-07-13 19:55:04.997061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.519 [2024-07-13 19:55:04.997075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:17.519 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:17.519 #21 NEW cov: 12064 ft: 14686 corp: 15/327b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:17.519 [2024-07-13 19:55:05.036484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b007a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.519 [2024-07-13 19:55:05.036509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.519 [2024-07-13 19:55:05.036567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b003b cdw11:3b003b33 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.519 [2024-07-13 19:55:05.036581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.519 #22 NEW cov: 12064 ft: 14735 corp: 16/346b lim: 35 exec/s: 0 rss: 70Mb L: 19/35 MS: 1 ChangeBit- 00:07:17.519 [2024-07-13 19:55:05.076616] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.519 [2024-07-13 19:55:05.076641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.519 [2024-07-13 19:55:05.076697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b0033 cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.519 [2024-07-13 19:55:05.076712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.519 #23 NEW cov: 12064 ft: 14756 corp: 17/361b lim: 35 exec/s: 23 rss: 70Mb L: 15/35 MS: 1 EraseBytes- 00:07:17.519 [2024-07-13 19:55:05.126746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b003b cdw11:3b003b7a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.519 [2024-07-13 19:55:05.126772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.519 [2024-07-13 19:55:05.126828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.519 [2024-07-13 19:55:05.126842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.519 #24 NEW cov: 12064 ft: 14807 corp: 18/380b lim: 35 exec/s: 24 rss: 70Mb L: 19/35 MS: 1 ShuffleBytes- 00:07:17.519 [2024-07-13 19:55:05.167089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:8e8e000a cdw11:3b008e3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.519 [2024-07-13 19:55:05.167115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.519 [2024-07-13 19:55:05.167170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.519 [2024-07-13 19:55:05.167185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.519 [2024-07-13 19:55:05.167239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.519 [2024-07-13 19:55:05.167252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.519 [2024-07-13 19:55:05.167305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:8e8e008e cdw11:8e008e8e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.519 [2024-07-13 19:55:05.167321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.778 #25 NEW cov: 12064 ft: 14874 corp: 19/409b lim: 35 exec/s: 25 rss: 70Mb L: 29/35 MS: 1 CrossOver- 00:07:17.778 [2024-07-13 19:55:05.217268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b007a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.778 [2024-07-13 19:55:05.217294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.778 [2024-07-13 19:55:05.217353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.778 [2024-07-13 19:55:05.217368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.778 [2024-07-13 19:55:05.217423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.778 [2024-07-13 19:55:05.217436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.778 [2024-07-13 19:55:05.217496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:3bff003b cdw11:3b00ff3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.778 [2024-07-13 19:55:05.217509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.778 #26 NEW cov: 12064 ft: 14938 corp: 20/438b lim: 35 exec/s: 26 rss: 70Mb L: 29/35 MS: 1 InsertRepeatedBytes- 00:07:17.778 [2024-07-13 19:55:05.267417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b007a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.779 [2024-07-13 19:55:05.267447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.779 [2024-07-13 19:55:05.267500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.779 [2024-07-13 19:55:05.267515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.779 [2024-07-13 19:55:05.267568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.779 [2024-07-13 19:55:05.267582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.779 [2024-07-13 19:55:05.267633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:3bff003b cdw11:3b00ff3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.779 [2024-07-13 19:55:05.267646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.779 #27 NEW cov: 12064 ft: 14958 corp: 21/467b lim: 35 exec/s: 27 rss: 70Mb L: 29/35 MS: 1 ChangeBinInt- 00:07:17.779 [2024-07-13 19:55:05.317167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ff3b00ff cdw11:00003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.779 [2024-07-13 19:55:05.317191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.779 #32 NEW cov: 12064 ft: 14969 corp: 22/476b lim: 35 exec/s: 32 rss: 70Mb L: 9/35 MS: 5 CMP-ShuffleBytes-CrossOver-PersAutoDict-CMP- DE: "\000\037"-"\377\377"-"\000\000\000\000"- 00:07:17.779 [2024-07-13 19:55:05.357424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b0011 
cdw11:7a003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.779 [2024-07-13 19:55:05.357455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.779 [2024-07-13 19:55:05.357512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.779 [2024-07-13 19:55:05.357525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.779 #33 NEW cov: 12064 ft: 14975 corp: 23/496b lim: 35 exec/s: 33 rss: 70Mb L: 20/35 MS: 1 InsertByte- 00:07:17.779 [2024-07-13 19:55:05.407573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b007a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.779 [2024-07-13 19:55:05.407599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.779 [2024-07-13 19:55:05.407653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b003b cdw11:3b003b33 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.779 [2024-07-13 19:55:05.407668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.779 #34 NEW cov: 12064 ft: 14984 corp: 24/515b lim: 35 exec/s: 34 rss: 70Mb L: 19/35 MS: 1 ShuffleBytes- 00:07:18.037 [2024-07-13 19:55:05.447958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b00007a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.037 [2024-07-13 19:55:05.447983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.037 [2024-07-13 19:55:05.448038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.037 [2024-07-13 19:55:05.448052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.037 [2024-07-13 19:55:05.448106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.037 [2024-07-13 19:55:05.448120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.037 [2024-07-13 19:55:05.448173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:3bff003b cdw11:3b00ff3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.037 [2024-07-13 19:55:05.448186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.037 #35 NEW cov: 12064 ft: 14994 corp: 25/544b lim: 35 exec/s: 35 rss: 70Mb L: 29/35 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:07:18.037 [2024-07-13 19:55:05.497793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.037 [2024-07-13 19:55:05.497819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.037 [2024-07-13 
19:55:05.497872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b0033 cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.037 [2024-07-13 19:55:05.497886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.037 #36 NEW cov: 12064 ft: 15004 corp: 26/559b lim: 35 exec/s: 36 rss: 71Mb L: 15/35 MS: 1 ChangeBinInt- 00:07:18.037 [2024-07-13 19:55:05.548093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff003b cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.037 [2024-07-13 19:55:05.548120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.037 [2024-07-13 19:55:05.548174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b003b cdw11:3b007a3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.037 [2024-07-13 19:55:05.548192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.037 [2024-07-13 19:55:05.548243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.037 [2024-07-13 19:55:05.548257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.037 #37 NEW cov: 12064 ft: 15014 corp: 27/584b lim: 35 exec/s: 37 rss: 71Mb L: 25/35 MS: 1 InsertRepeatedBytes- 00:07:18.037 [2024-07-13 19:55:05.588082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.037 [2024-07-13 19:55:05.588119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.037 [2024-07-13 19:55:05.588174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b0033 cdw11:27003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.037 [2024-07-13 19:55:05.588188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.037 #38 NEW cov: 12064 ft: 15022 corp: 28/599b lim: 35 exec/s: 38 rss: 71Mb L: 15/35 MS: 1 ChangeByte- 00:07:18.037 [2024-07-13 19:55:05.628184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b0072 cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.037 [2024-07-13 19:55:05.628208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.037 [2024-07-13 19:55:05.628262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.037 [2024-07-13 19:55:05.628276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.037 #39 NEW cov: 12064 ft: 15048 corp: 29/617b lim: 35 exec/s: 39 rss: 71Mb L: 18/35 MS: 1 EraseBytes- 00:07:18.037 [2024-07-13 19:55:05.678338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ff3b00ff cdw11:00003b3b SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:18.037 [2024-07-13 19:55:05.678363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.037 [2024-07-13 19:55:05.678417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b00ff cdw11:00003b00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.037 [2024-07-13 19:55:05.678430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.295 #40 NEW cov: 12064 ft: 15069 corp: 30/635b lim: 35 exec/s: 40 rss: 71Mb L: 18/35 MS: 1 CopyPart- 00:07:18.295 [2024-07-13 19:55:05.728536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b007a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.295 [2024-07-13 19:55:05.728561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.295 [2024-07-13 19:55:05.728616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.295 [2024-07-13 19:55:05.728631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.295 #41 NEW cov: 12064 ft: 15082 corp: 31/652b lim: 35 exec/s: 41 rss: 71Mb L: 17/35 MS: 1 EraseBytes- 00:07:18.295 [2024-07-13 19:55:05.768587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b007a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.295 [2024-07-13 19:55:05.768611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.295 [2024-07-13 19:55:05.768667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b003b cdw11:3b003b33 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.295 [2024-07-13 19:55:05.768681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.295 #42 NEW cov: 12064 ft: 15087 corp: 32/671b lim: 35 exec/s: 42 rss: 71Mb L: 19/35 MS: 1 ChangeBit- 00:07:18.295 [2024-07-13 19:55:05.819008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b007a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.295 [2024-07-13 19:55:05.819033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.295 [2024-07-13 19:55:05.819089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.295 [2024-07-13 19:55:05.819103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.295 [2024-07-13 19:55:05.819159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:413b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.295 [2024-07-13 19:55:05.819174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.295 [2024-07-13 19:55:05.819227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.295 [2024-07-13 19:55:05.819241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.295 #43 NEW cov: 12064 ft: 15104 corp: 33/699b lim: 35 exec/s: 43 rss: 71Mb L: 28/35 MS: 1 CrossOver- 00:07:18.296 [2024-07-13 19:55:05.858963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b007a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.296 [2024-07-13 19:55:05.858988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.296 [2024-07-13 19:55:05.859044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.296 [2024-07-13 19:55:05.859059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.296 [2024-07-13 19:55:05.859111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.296 [2024-07-13 19:55:05.859125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.296 #44 NEW cov: 12064 ft: 15113 corp: 34/724b lim: 35 exec/s: 44 rss: 71Mb L: 25/35 MS: 1 EraseBytes- 00:07:18.296 [2024-07-13 19:55:05.909114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff003b cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.296 [2024-07-13 19:55:05.909139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.296 [2024-07-13 19:55:05.909192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b003b cdw11:3b007a3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.296 [2024-07-13 19:55:05.909206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.296 [2024-07-13 19:55:05.909261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.296 [2024-07-13 19:55:05.909275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.296 #45 NEW cov: 12064 ft: 15123 corp: 35/749b lim: 35 exec/s: 45 rss: 71Mb L: 25/35 MS: 1 ChangeByte- 00:07:18.555 [2024-07-13 19:55:05.959417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b007a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.555 [2024-07-13 19:55:05.959446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.555 [2024-07-13 19:55:05.959503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.555 [2024-07-13 19:55:05.959517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.555 
[2024-07-13 19:55:05.959570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.555 [2024-07-13 19:55:05.959584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.555 [2024-07-13 19:55:05.959636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.556 [2024-07-13 19:55:05.959649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.556 #46 NEW cov: 12064 ft: 15127 corp: 36/777b lim: 35 exec/s: 46 rss: 71Mb L: 28/35 MS: 1 CopyPart- 00:07:18.556 [2024-07-13 19:55:05.999680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b007a cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.556 [2024-07-13 19:55:05.999705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.556 [2024-07-13 19:55:05.999756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.556 [2024-07-13 19:55:05.999769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.556 [2024-07-13 19:55:05.999824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.556 [2024-07-13 19:55:05.999837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:18.556 [2024-07-13 19:55:05.999894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.556 [2024-07-13 19:55:05.999908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:18.556 [2024-07-13 19:55:05.999963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.556 [2024-07-13 19:55:05.999977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:18.556 #47 NEW cov: 12064 ft: 15133 corp: 37/812b lim: 35 exec/s: 47 rss: 71Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:18.556 [2024-07-13 19:55:06.039216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.556 [2024-07-13 19:55:06.039241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.556 #48 NEW cov: 12064 ft: 15140 corp: 38/819b lim: 35 exec/s: 48 rss: 71Mb L: 7/35 MS: 1 EraseBytes- 00:07:18.556 [2024-07-13 19:55:06.079447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.556 [2024-07-13 19:55:06.079472] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.556 [2024-07-13 19:55:06.079533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:3b3b003b cdw11:3b003b3b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.556 [2024-07-13 19:55:06.079547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:18.556 #49 NEW cov: 12064 ft: 15221 corp: 39/835b lim: 35 exec/s: 49 rss: 71Mb L: 16/35 MS: 1 EraseBytes- 00:07:18.556 [2024-07-13 19:55:06.119440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:8e8e000a cdw11:8e008e8e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.556 [2024-07-13 19:55:06.119471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:18.556 #50 NEW cov: 12064 ft: 15234 corp: 40/847b lim: 35 exec/s: 25 rss: 71Mb L: 12/35 MS: 1 ShuffleBytes- 00:07:18.556 #50 DONE cov: 12064 ft: 15234 corp: 40/847b lim: 35 exec/s: 25 rss: 71Mb 00:07:18.556 ###### Recommended dictionary. ###### 00:07:18.556 "\377\377" # Uses: 1 00:07:18.556 "\000\037" # Uses: 0 00:07:18.556 "\000\000\000\000" # Uses: 1 00:07:18.556 ###### End of recommended dictionary. ###### 00:07:18.556 Done 50 runs in 2 second(s) 00:07:18.815 19:55:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:07:18.815 19:55:06 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:18.815 19:55:06 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:18.815 19:55:06 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:18.815 19:55:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:07:18.815 19:55:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:18.815 19:55:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:18.815 19:55:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:18.815 19:55:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:07:18.815 19:55:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:18.815 19:55:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:18.815 19:55:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:07:18.815 19:55:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4403 00:07:18.815 19:55:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:18.815 19:55:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:07:18.815 19:55:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:18.815 19:55:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:18.815 19:55:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:18.816 19:55:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 
0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:07:18.816 [2024-07-13 19:55:06.297128] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:18.816 [2024-07-13 19:55:06.297196] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3674855 ] 00:07:18.816 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.816 [2024-07-13 19:55:06.471987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.074 [2024-07-13 19:55:06.494350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.074 [2024-07-13 19:55:06.546626] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.074 [2024-07-13 19:55:06.562936] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:07:19.074 INFO: Running with entropic power schedule (0xFF, 100). 00:07:19.074 INFO: Seed: 993025198 00:07:19.074 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:19.074 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:19.074 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:19.074 INFO: A corpus is not provided, starting from an empty corpus 00:07:19.074 #2 INITED exec/s: 0 rss: 62Mb 00:07:19.074 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
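Run 3 repeats the identical setup with the index advanced: port 4403, /tmp/fuzz_json_3.conf, and corpus directory llvm_nvmf_3. The (( i++ )) / (( i < fuzz_num )) tests traced from ../common.sh@72 together with the start_llvm_fuzz calls at @73 suggest a simple driver loop along the lines sketched below; fuzz_num is assumed to be set earlier to the number of fuzzer targets, the starting index is not visible in this excerpt, and the 1-second runtime and 0x1 core mask are taken from the arguments shown in the trace.

  # Implied outer loop from ../common.sh@72-73 (argument order: fuzzer_type, time in s, core mask).
  for (( i = 0; i < fuzz_num; i++ )); do
      start_llvm_fuzz $i 1 0x1
  done

Each run finishes by removing its per-run config and the suppression file (the rm -rf at nvmf/run.sh@54 above) before the loop advances to the next fuzzer target.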
00:07:19.074 This may also happen if the target rejected all inputs we tried so far 00:07:19.332 NEW_FUNC[1/678]: 0x498a30 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:07:19.332 NEW_FUNC[2/678]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:19.332 #7 NEW cov: 11691 ft: 11718 corp: 2/5b lim: 20 exec/s: 0 rss: 69Mb L: 4/4 MS: 5 CopyPart-InsertByte-InsertByte-ChangeByte-InsertByte- 00:07:19.332 NEW_FUNC[1/2]: 0x1a7c090 in event_queue_run_batch /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:528 00:07:19.332 NEW_FUNC[2/2]: 0x1a81430 in _reactor_run /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:894 00:07:19.332 #9 NEW cov: 11864 ft: 12726 corp: 3/17b lim: 20 exec/s: 0 rss: 69Mb L: 12/12 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:19.613 [2024-07-13 19:55:07.010811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:19.613 [2024-07-13 19:55:07.010852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.613 NEW_FUNC[1/20]: 0x11e7640 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3333 00:07:19.613 NEW_FUNC[2/20]: 0x11e81c0 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3275 00:07:19.613 #12 NEW cov: 12209 ft: 13458 corp: 4/35b lim: 20 exec/s: 0 rss: 69Mb L: 18/18 MS: 3 EraseBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:19.613 [2024-07-13 19:55:07.081001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:19.613 [2024-07-13 19:55:07.081033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.613 #13 NEW cov: 12294 ft: 13743 corp: 5/53b lim: 20 exec/s: 0 rss: 69Mb L: 18/18 MS: 1 ChangeBit- 00:07:19.613 #14 NEW cov: 12294 ft: 13859 corp: 6/57b lim: 20 exec/s: 0 rss: 69Mb L: 4/18 MS: 1 ChangeBit- 00:07:19.613 #15 NEW cov: 12294 ft: 13927 corp: 7/61b lim: 20 exec/s: 0 rss: 70Mb L: 4/18 MS: 1 ChangeBit- 00:07:19.613 [2024-07-13 19:55:07.241539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:19.613 [2024-07-13 19:55:07.241567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.613 #16 NEW cov: 12294 ft: 13995 corp: 8/79b lim: 20 exec/s: 0 rss: 70Mb L: 18/18 MS: 1 ChangeByte- 00:07:19.872 #17 NEW cov: 12294 ft: 14127 corp: 9/99b lim: 20 exec/s: 0 rss: 70Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:07:19.872 #23 NEW cov: 12294 ft: 14216 corp: 10/111b lim: 20 exec/s: 0 rss: 70Mb L: 12/20 MS: 1 ChangeByte- 00:07:19.872 #24 NEW cov: 12294 ft: 14329 corp: 11/116b lim: 20 exec/s: 0 rss: 70Mb L: 5/20 MS: 1 CrossOver- 00:07:19.872 [2024-07-13 19:55:07.462220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:19.872 [2024-07-13 19:55:07.462249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:19.872 #25 NEW cov: 
12294 ft: 14348 corp: 12/135b lim: 20 exec/s: 0 rss: 70Mb L: 19/20 MS: 1 InsertByte- 00:07:20.131 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:20.131 #26 NEW cov: 12317 ft: 14465 corp: 13/150b lim: 20 exec/s: 0 rss: 70Mb L: 15/20 MS: 1 InsertRepeatedBytes- 00:07:20.131 [2024-07-13 19:55:07.572937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:20.131 [2024-07-13 19:55:07.572964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.131 #32 NEW cov: 12320 ft: 14634 corp: 14/170b lim: 20 exec/s: 32 rss: 70Mb L: 20/20 MS: 1 CrossOver- 00:07:20.131 [2024-07-13 19:55:07.632775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:20.131 [2024-07-13 19:55:07.632809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.131 #33 NEW cov: 12320 ft: 14648 corp: 15/188b lim: 20 exec/s: 33 rss: 70Mb L: 18/20 MS: 1 CopyPart- 00:07:20.131 [2024-07-13 19:55:07.683153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:20.131 [2024-07-13 19:55:07.683185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.131 #34 NEW cov: 12320 ft: 14665 corp: 16/208b lim: 20 exec/s: 34 rss: 70Mb L: 20/20 MS: 1 ShuffleBytes- 00:07:20.131 #35 NEW cov: 12320 ft: 14687 corp: 17/225b lim: 20 exec/s: 35 rss: 70Mb L: 17/20 MS: 1 InsertRepeatedBytes- 00:07:20.390 [2024-07-13 19:55:07.802728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:20.390 [2024-07-13 19:55:07.802760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.390 #36 NEW cov: 12321 ft: 14904 corp: 18/234b lim: 20 exec/s: 36 rss: 70Mb L: 9/20 MS: 1 EraseBytes- 00:07:20.390 #37 NEW cov: 12321 ft: 14944 corp: 19/238b lim: 20 exec/s: 37 rss: 70Mb L: 4/20 MS: 1 ChangeBinInt- 00:07:20.390 [2024-07-13 19:55:07.913056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:20.390 [2024-07-13 19:55:07.913090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.390 #38 NEW cov: 12321 ft: 14960 corp: 20/247b lim: 20 exec/s: 38 rss: 70Mb L: 9/20 MS: 1 ChangeBit- 00:07:20.390 [2024-07-13 19:55:07.973959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:20.390 [2024-07-13 19:55:07.973988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.390 #39 NEW cov: 12321 ft: 15032 corp: 21/267b lim: 20 exec/s: 39 rss: 70Mb L: 20/20 MS: 1 CrossOver- 00:07:20.648 #40 NEW cov: 12321 ft: 15113 corp: 22/274b lim: 20 exec/s: 40 rss: 70Mb L: 7/20 MS: 1 EraseBytes- 00:07:20.648 #41 NEW cov: 12321 ft: 15164 corp: 23/286b lim: 20 exec/s: 41 rss: 70Mb L: 12/20 MS: 1 CrossOver- 00:07:20.648 
[2024-07-13 19:55:08.134243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:20.648 [2024-07-13 19:55:08.134272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.648 #42 NEW cov: 12321 ft: 15176 corp: 24/304b lim: 20 exec/s: 42 rss: 70Mb L: 18/20 MS: 1 ShuffleBytes- 00:07:20.648 #43 NEW cov: 12321 ft: 15201 corp: 25/319b lim: 20 exec/s: 43 rss: 70Mb L: 15/20 MS: 1 CrossOver- 00:07:20.648 #44 NEW cov: 12321 ft: 15242 corp: 26/326b lim: 20 exec/s: 44 rss: 70Mb L: 7/20 MS: 1 ChangeBinInt- 00:07:20.907 #45 NEW cov: 12321 ft: 15259 corp: 27/344b lim: 20 exec/s: 45 rss: 70Mb L: 18/20 MS: 1 InsertByte- 00:07:20.907 [2024-07-13 19:55:08.375135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:20.907 [2024-07-13 19:55:08.375167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.907 #46 NEW cov: 12321 ft: 15297 corp: 28/364b lim: 20 exec/s: 46 rss: 70Mb L: 20/20 MS: 1 CopyPart- 00:07:20.907 #47 NEW cov: 12321 ft: 15325 corp: 29/368b lim: 20 exec/s: 47 rss: 71Mb L: 4/20 MS: 1 ShuffleBytes- 00:07:20.907 [2024-07-13 19:55:08.485409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:20.907 [2024-07-13 19:55:08.485445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.907 #48 NEW cov: 12321 ft: 15335 corp: 30/386b lim: 20 exec/s: 48 rss: 71Mb L: 18/20 MS: 1 ChangeByte- 00:07:20.907 #49 NEW cov: 12321 ft: 15358 corp: 31/398b lim: 20 exec/s: 49 rss: 71Mb L: 12/20 MS: 1 CopyPart- 00:07:21.166 [2024-07-13 19:55:08.585707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:21.166 [2024-07-13 19:55:08.585735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.166 #50 NEW cov: 12321 ft: 15375 corp: 32/417b lim: 20 exec/s: 25 rss: 71Mb L: 19/20 MS: 1 InsertByte- 00:07:21.166 #50 DONE cov: 12321 ft: 15375 corp: 32/417b lim: 20 exec/s: 25 rss: 71Mb 00:07:21.166 Done 50 runs in 2 second(s) 00:07:21.166 19:55:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:07:21.166 19:55:08 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:21.166 19:55:08 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:21.166 19:55:08 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:21.166 19:55:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:07:21.166 19:55:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:21.166 19:55:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:21.166 19:55:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:21.166 19:55:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:07:21.166 19:55:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:21.166 19:55:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 
-- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:21.166 19:55:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:07:21.166 19:55:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4404 00:07:21.166 19:55:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:21.166 19:55:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:07:21.166 19:55:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:21.166 19:55:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:21.166 19:55:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:21.166 19:55:08 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:07:21.166 [2024-07-13 19:55:08.778234] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:21.166 [2024-07-13 19:55:08.778319] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3675380 ] 00:07:21.166 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.425 [2024-07-13 19:55:08.957308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.425 [2024-07-13 19:55:08.979748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.425 [2024-07-13 19:55:09.031807] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.425 [2024-07-13 19:55:09.048094] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:07:21.425 INFO: Running with entropic power schedule (0xFF, 100). 00:07:21.425 INFO: Seed: 3477024832 00:07:21.684 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:21.684 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:21.684 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:21.684 INFO: A corpus is not provided, starting from an empty corpus 00:07:21.684 #2 INITED exec/s: 0 rss: 62Mb 00:07:21.684 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:21.684 This may also happen if the target rejected all inputs we tried so far 00:07:21.684 [2024-07-13 19:55:09.114649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.684 [2024-07-13 19:55:09.114680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.684 [2024-07-13 19:55:09.114797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.684 [2024-07-13 19:55:09.114813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.684 [2024-07-13 19:55:09.114933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.684 [2024-07-13 19:55:09.114948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.944 NEW_FUNC[1/687]: 0x499b20 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:07:21.944 NEW_FUNC[2/687]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:21.944 #3 NEW cov: 11797 ft: 11797 corp: 2/26b lim: 35 exec/s: 0 rss: 69Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:07:21.944 [2024-07-13 19:55:09.445777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.944 [2024-07-13 19:55:09.445813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.944 [2024-07-13 19:55:09.445929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.944 [2024-07-13 19:55:09.445946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.944 [2024-07-13 19:55:09.446064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.944 [2024-07-13 19:55:09.446080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.944 [2024-07-13 19:55:09.446206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.944 [2024-07-13 19:55:09.446221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.944 NEW_FUNC[1/5]: 0x17aade0 in spdk_nvme_qpair_process_completions /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:757 00:07:21.944 NEW_FUNC[2/5]: 0x180d8d0 in nvme_transport_qpair_process_completions /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_transport.c:607 00:07:21.944 #4 NEW cov: 11971 ft: 12773 corp: 3/55b lim: 35 exec/s: 0 rss: 69Mb L: 29/29 MS: 1 
InsertRepeatedBytes- 00:07:21.944 [2024-07-13 19:55:09.485771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.944 [2024-07-13 19:55:09.485797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.944 [2024-07-13 19:55:09.485912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.944 [2024-07-13 19:55:09.485928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.944 [2024-07-13 19:55:09.486037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.944 [2024-07-13 19:55:09.486053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.944 [2024-07-13 19:55:09.486167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.944 [2024-07-13 19:55:09.486183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.944 #5 NEW cov: 11977 ft: 13059 corp: 4/85b lim: 35 exec/s: 0 rss: 69Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:07:21.944 [2024-07-13 19:55:09.535918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.944 [2024-07-13 19:55:09.535948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.944 [2024-07-13 19:55:09.536057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.944 [2024-07-13 19:55:09.536072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.944 [2024-07-13 19:55:09.536182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:06060606 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.944 [2024-07-13 19:55:09.536198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.944 [2024-07-13 19:55:09.536305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.944 [2024-07-13 19:55:09.536320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.944 #6 NEW cov: 12062 ft: 13341 corp: 5/114b lim: 35 exec/s: 0 rss: 69Mb L: 29/30 MS: 1 ChangeBinInt- 00:07:21.944 [2024-07-13 19:55:09.585267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ff0000ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.944 [2024-07-13 19:55:09.585293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:07:22.204 #7 NEW cov: 12062 ft: 14188 corp: 6/126b lim: 35 exec/s: 0 rss: 70Mb L: 12/30 MS: 1 CrossOver- 00:07:22.204 [2024-07-13 19:55:09.635967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000006c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.204 [2024-07-13 19:55:09.635994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.204 [2024-07-13 19:55:09.636111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.204 [2024-07-13 19:55:09.636126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.204 [2024-07-13 19:55:09.636242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.204 [2024-07-13 19:55:09.636257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.204 #8 NEW cov: 12062 ft: 14285 corp: 7/151b lim: 35 exec/s: 0 rss: 70Mb L: 25/30 MS: 1 ChangeByte- 00:07:22.204 [2024-07-13 19:55:09.676639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:fcfc00ff cdw11:fcfc0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.204 [2024-07-13 19:55:09.676667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.204 [2024-07-13 19:55:09.676799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.204 [2024-07-13 19:55:09.676816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.204 [2024-07-13 19:55:09.676938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.204 [2024-07-13 19:55:09.676954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.204 [2024-07-13 19:55:09.677072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.204 [2024-07-13 19:55:09.677088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.204 [2024-07-13 19:55:09.677199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.204 [2024-07-13 19:55:09.677215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:22.204 #9 NEW cov: 12062 ft: 14421 corp: 8/186b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:22.204 [2024-07-13 19:55:09.715734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.204 [2024-07-13 
19:55:09.715760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.204 #12 NEW cov: 12062 ft: 14495 corp: 9/196b lim: 35 exec/s: 0 rss: 70Mb L: 10/35 MS: 3 CopyPart-ShuffleBytes-CrossOver- 00:07:22.204 [2024-07-13 19:55:09.756874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.204 [2024-07-13 19:55:09.756899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.204 [2024-07-13 19:55:09.757017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.204 [2024-07-13 19:55:09.757032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.204 [2024-07-13 19:55:09.757143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.204 [2024-07-13 19:55:09.757159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.205 [2024-07-13 19:55:09.757270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.205 [2024-07-13 19:55:09.757285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.205 [2024-07-13 19:55:09.757400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.205 [2024-07-13 19:55:09.757415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:22.205 #13 NEW cov: 12062 ft: 14540 corp: 10/231b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 CopyPart- 00:07:22.205 [2024-07-13 19:55:09.796434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.205 [2024-07-13 19:55:09.796463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.205 [2024-07-13 19:55:09.796587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.205 [2024-07-13 19:55:09.796603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.205 [2024-07-13 19:55:09.796711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.205 [2024-07-13 19:55:09.796727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.205 #14 NEW cov: 12062 ft: 14634 corp: 11/258b lim: 35 exec/s: 0 rss: 70Mb L: 27/35 MS: 1 CrossOver- 00:07:22.205 [2024-07-13 19:55:09.836558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.205 [2024-07-13 19:55:09.836583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.205 [2024-07-13 19:55:09.836711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.205 [2024-07-13 19:55:09.836728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.205 [2024-07-13 19:55:09.836841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.205 [2024-07-13 19:55:09.836858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.205 #15 NEW cov: 12062 ft: 14719 corp: 12/284b lim: 35 exec/s: 0 rss: 70Mb L: 26/35 MS: 1 EraseBytes- 00:07:22.464 [2024-07-13 19:55:09.876711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.464 [2024-07-13 19:55:09.876737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.464 [2024-07-13 19:55:09.876849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.464 [2024-07-13 19:55:09.876865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.464 [2024-07-13 19:55:09.876980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.464 [2024-07-13 19:55:09.876994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.464 #16 NEW cov: 12062 ft: 14750 corp: 13/310b lim: 35 exec/s: 0 rss: 70Mb L: 26/35 MS: 1 ChangeBinInt- 00:07:22.464 [2024-07-13 19:55:09.927390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.464 [2024-07-13 19:55:09.927417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.464 [2024-07-13 19:55:09.927533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.464 [2024-07-13 19:55:09.927550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.464 [2024-07-13 19:55:09.927667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.464 [2024-07-13 19:55:09.927681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.464 [2024-07-13 19:55:09.927793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.464 [2024-07-13 19:55:09.927807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.464 [2024-07-13 19:55:09.927915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.464 [2024-07-13 19:55:09.927930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:22.464 #17 NEW cov: 12062 ft: 14788 corp: 14/345b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 CrossOver- 00:07:22.464 [2024-07-13 19:55:09.976743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.464 [2024-07-13 19:55:09.976769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.464 [2024-07-13 19:55:09.976891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.464 [2024-07-13 19:55:09.976907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.464 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:22.464 #18 NEW cov: 12085 ft: 15047 corp: 15/359b lim: 35 exec/s: 0 rss: 70Mb L: 14/35 MS: 1 CrossOver- 00:07:22.464 [2024-07-13 19:55:10.017664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.464 [2024-07-13 19:55:10.017690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.464 [2024-07-13 19:55:10.017810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.464 [2024-07-13 19:55:10.017826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.464 [2024-07-13 19:55:10.017947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.464 [2024-07-13 19:55:10.017964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.464 [2024-07-13 19:55:10.018083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.464 [2024-07-13 19:55:10.018100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.464 [2024-07-13 19:55:10.018218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.464 [2024-07-13 19:55:10.018238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 
sqhd:0013 p:0 m:0 dnr:0 00:07:22.464 #19 NEW cov: 12085 ft: 15085 corp: 16/394b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:22.464 [2024-07-13 19:55:10.067530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.464 [2024-07-13 19:55:10.067558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.464 [2024-07-13 19:55:10.067661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.464 [2024-07-13 19:55:10.067676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.464 [2024-07-13 19:55:10.067788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:06060606 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.464 [2024-07-13 19:55:10.067805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.464 [2024-07-13 19:55:10.067912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:f9e8f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.464 [2024-07-13 19:55:10.067928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.464 #20 NEW cov: 12085 ft: 15095 corp: 17/424b lim: 35 exec/s: 20 rss: 70Mb L: 30/35 MS: 1 InsertByte- 00:07:22.723 [2024-07-13 19:55:10.127448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.723 [2024-07-13 19:55:10.127478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.723 [2024-07-13 19:55:10.127593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f9f9f93d cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.723 [2024-07-13 19:55:10.127609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.723 [2024-07-13 19:55:10.127726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.723 [2024-07-13 19:55:10.127744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.723 #21 NEW cov: 12085 ft: 15167 corp: 18/451b lim: 35 exec/s: 21 rss: 70Mb L: 27/35 MS: 1 InsertByte- 00:07:22.723 [2024-07-13 19:55:10.177065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a0f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.723 [2024-07-13 19:55:10.177091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.723 #23 NEW cov: 12085 ft: 15178 corp: 19/461b lim: 35 exec/s: 23 rss: 70Mb L: 10/35 MS: 2 CopyPart-CMP- DE: "\017\000\000\000\000\000\000\000"- 00:07:22.723 [2024-07-13 19:55:10.217429] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.723 [2024-07-13 19:55:10.217458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.723 [2024-07-13 19:55:10.217587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.723 [2024-07-13 19:55:10.217602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.723 #24 NEW cov: 12085 ft: 15204 corp: 20/475b lim: 35 exec/s: 24 rss: 70Mb L: 14/35 MS: 1 ChangeBinInt- 00:07:22.723 [2024-07-13 19:55:10.268351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.723 [2024-07-13 19:55:10.268376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.723 [2024-07-13 19:55:10.268512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.723 [2024-07-13 19:55:10.268530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.723 [2024-07-13 19:55:10.268657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.723 [2024-07-13 19:55:10.268673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.723 [2024-07-13 19:55:10.268798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.723 [2024-07-13 19:55:10.268814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.723 [2024-07-13 19:55:10.268931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.723 [2024-07-13 19:55:10.268948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:22.723 #25 NEW cov: 12085 ft: 15208 corp: 21/510b lim: 35 exec/s: 25 rss: 70Mb L: 35/35 MS: 1 ChangeByte- 00:07:22.723 [2024-07-13 19:55:10.308199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.723 [2024-07-13 19:55:10.308226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.723 [2024-07-13 19:55:10.308355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.723 [2024-07-13 19:55:10.308371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.723 [2024-07-13 19:55:10.308490] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:06060606 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.723 [2024-07-13 19:55:10.308505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.723 [2024-07-13 19:55:10.308622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.723 [2024-07-13 19:55:10.308640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.723 #26 NEW cov: 12085 ft: 15214 corp: 22/539b lim: 35 exec/s: 26 rss: 70Mb L: 29/35 MS: 1 ShuffleBytes- 00:07:22.723 [2024-07-13 19:55:10.348607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00800000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.723 [2024-07-13 19:55:10.348631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.723 [2024-07-13 19:55:10.348745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.723 [2024-07-13 19:55:10.348762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.723 [2024-07-13 19:55:10.348878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.723 [2024-07-13 19:55:10.348896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.723 [2024-07-13 19:55:10.349012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.723 [2024-07-13 19:55:10.349027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.724 [2024-07-13 19:55:10.349135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.724 [2024-07-13 19:55:10.349150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:22.724 #27 NEW cov: 12085 ft: 15227 corp: 23/574b lim: 35 exec/s: 27 rss: 70Mb L: 35/35 MS: 1 ChangeBit- 00:07:22.983 [2024-07-13 19:55:10.398771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.983 [2024-07-13 19:55:10.398796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.983 [2024-07-13 19:55:10.398918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.983 [2024-07-13 19:55:10.398934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.983 [2024-07-13 19:55:10.399052] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.983 [2024-07-13 19:55:10.399067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.983 [2024-07-13 19:55:10.399187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.983 [2024-07-13 19:55:10.399202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.983 [2024-07-13 19:55:10.399319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.983 [2024-07-13 19:55:10.399335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:22.983 #28 NEW cov: 12085 ft: 15237 corp: 24/609b lim: 35 exec/s: 28 rss: 70Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:22.983 [2024-07-13 19:55:10.438930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.983 [2024-07-13 19:55:10.438955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.983 [2024-07-13 19:55:10.439062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.983 [2024-07-13 19:55:10.439077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.983 [2024-07-13 19:55:10.439188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.983 [2024-07-13 19:55:10.439204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.983 [2024-07-13 19:55:10.439320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.983 [2024-07-13 19:55:10.439339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.983 [2024-07-13 19:55:10.439463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.983 [2024-07-13 19:55:10.439477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:22.983 #29 NEW cov: 12085 ft: 15279 corp: 25/644b lim: 35 exec/s: 29 rss: 70Mb L: 35/35 MS: 1 ChangeByte- 00:07:22.983 [2024-07-13 19:55:10.478803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.983 [2024-07-13 19:55:10.478829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.983 [2024-07-13 
19:55:10.478964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.983 [2024-07-13 19:55:10.478980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.983 [2024-07-13 19:55:10.479096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:0606f606 cdw11:06060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.983 [2024-07-13 19:55:10.479112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.983 [2024-07-13 19:55:10.479226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.983 [2024-07-13 19:55:10.479244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.983 #30 NEW cov: 12085 ft: 15287 corp: 26/673b lim: 35 exec/s: 30 rss: 70Mb L: 29/35 MS: 1 ChangeBinInt- 00:07:22.983 [2024-07-13 19:55:10.518323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.983 [2024-07-13 19:55:10.518349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.983 [2024-07-13 19:55:10.518490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00008000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.983 [2024-07-13 19:55:10.518506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.983 #31 NEW cov: 12085 ft: 15309 corp: 27/687b lim: 35 exec/s: 31 rss: 70Mb L: 14/35 MS: 1 ChangeBit- 00:07:22.983 [2024-07-13 19:55:10.568435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.983 [2024-07-13 19:55:10.568463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.983 [2024-07-13 19:55:10.568595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.983 [2024-07-13 19:55:10.568611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.983 #32 NEW cov: 12085 ft: 15317 corp: 28/701b lim: 35 exec/s: 32 rss: 70Mb L: 14/35 MS: 1 ShuffleBytes- 00:07:22.983 [2024-07-13 19:55:10.609376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.983 [2024-07-13 19:55:10.609402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.983 [2024-07-13 19:55:10.609542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.983 [2024-07-13 19:55:10.609561] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.983 [2024-07-13 19:55:10.609694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f9f900f9 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.983 [2024-07-13 19:55:10.609710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.983 [2024-07-13 19:55:10.609826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.983 [2024-07-13 19:55:10.609842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.983 [2024-07-13 19:55:10.609958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.983 [2024-07-13 19:55:10.609974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:22.983 #33 NEW cov: 12085 ft: 15328 corp: 29/736b lim: 35 exec/s: 33 rss: 70Mb L: 35/35 MS: 1 CrossOver- 00:07:23.243 [2024-07-13 19:55:10.658839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.243 [2024-07-13 19:55:10.658866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.243 [2024-07-13 19:55:10.658995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f9f906f9 cdw11:e8f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.243 [2024-07-13 19:55:10.659011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.243 #34 NEW cov: 12085 ft: 15396 corp: 30/753b lim: 35 exec/s: 34 rss: 70Mb L: 17/35 MS: 1 EraseBytes- 00:07:23.243 [2024-07-13 19:55:10.709578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000005b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.243 [2024-07-13 19:55:10.709604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.243 [2024-07-13 19:55:10.709744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.243 [2024-07-13 19:55:10.709761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.243 [2024-07-13 19:55:10.709874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.243 [2024-07-13 19:55:10.709891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.243 [2024-07-13 19:55:10.710036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.243 [2024-07-13 19:55:10.710053] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.243 #35 NEW cov: 12085 ft: 15407 corp: 31/781b lim: 35 exec/s: 35 rss: 70Mb L: 28/35 MS: 1 InsertByte- 00:07:23.243 [2024-07-13 19:55:10.759120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.243 [2024-07-13 19:55:10.759145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.243 [2024-07-13 19:55:10.759260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000500 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.243 [2024-07-13 19:55:10.759277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.243 #36 NEW cov: 12085 ft: 15420 corp: 32/795b lim: 35 exec/s: 36 rss: 70Mb L: 14/35 MS: 1 ChangeBinInt- 00:07:23.243 [2024-07-13 19:55:10.809251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.243 [2024-07-13 19:55:10.809277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.243 [2024-07-13 19:55:10.809404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.243 [2024-07-13 19:55:10.809429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.243 #37 NEW cov: 12085 ft: 15434 corp: 33/809b lim: 35 exec/s: 37 rss: 70Mb L: 14/35 MS: 1 ChangeByte- 00:07:23.243 [2024-07-13 19:55:10.849892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.243 [2024-07-13 19:55:10.849916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.243 [2024-07-13 19:55:10.850043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.243 [2024-07-13 19:55:10.850058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.243 [2024-07-13 19:55:10.850178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f9f90606 cdw11:f9060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.243 [2024-07-13 19:55:10.850194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.243 [2024-07-13 19:55:10.850310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:06f90606 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.243 [2024-07-13 19:55:10.850326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.243 #38 NEW cov: 12085 ft: 15465 corp: 34/841b lim: 35 exec/s: 38 rss: 70Mb L: 32/35 MS: 1 CopyPart- 00:07:23.243 [2024-07-13 
19:55:10.899521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.243 [2024-07-13 19:55:10.899546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.243 [2024-07-13 19:55:10.899678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00004300 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.243 [2024-07-13 19:55:10.899693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.503 #39 NEW cov: 12085 ft: 15469 corp: 35/856b lim: 35 exec/s: 39 rss: 71Mb L: 15/35 MS: 1 InsertByte- 00:07:23.503 [2024-07-13 19:55:10.949616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:21000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.503 [2024-07-13 19:55:10.949644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.503 [2024-07-13 19:55:10.949770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.503 [2024-07-13 19:55:10.949788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.503 #40 NEW cov: 12085 ft: 15483 corp: 36/871b lim: 35 exec/s: 40 rss: 71Mb L: 15/35 MS: 1 InsertByte- 00:07:23.503 [2024-07-13 19:55:10.990333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.503 [2024-07-13 19:55:10.990360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.503 [2024-07-13 19:55:10.990474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.503 [2024-07-13 19:55:10.990490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.503 [2024-07-13 19:55:10.990620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:06060606 cdw11:f9060000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.503 [2024-07-13 19:55:10.990636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.503 [2024-07-13 19:55:10.990746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:f9e806f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.503 [2024-07-13 19:55:10.990763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.503 #41 NEW cov: 12085 ft: 15491 corp: 37/901b lim: 35 exec/s: 41 rss: 71Mb L: 30/35 MS: 1 ShuffleBytes- 00:07:23.503 [2024-07-13 19:55:11.029885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.503 [2024-07-13 19:55:11.029911] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.503 [2024-07-13 19:55:11.030023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.503 [2024-07-13 19:55:11.030037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.503 #42 NEW cov: 12085 ft: 15499 corp: 38/918b lim: 35 exec/s: 42 rss: 71Mb L: 17/35 MS: 1 EraseBytes- 00:07:23.503 [2024-07-13 19:55:11.070589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.503 [2024-07-13 19:55:11.070615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.503 [2024-07-13 19:55:11.070749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f9f9f9f9 cdw11:f9f90000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.503 [2024-07-13 19:55:11.070767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.503 [2024-07-13 19:55:11.070875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:06060606 cdw11:f9060002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.503 [2024-07-13 19:55:11.070890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.503 [2024-07-13 19:55:11.071014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:f9f90606 cdw11:e8f90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.503 [2024-07-13 19:55:11.071031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.503 #43 NEW cov: 12085 ft: 15503 corp: 39/949b lim: 35 exec/s: 21 rss: 71Mb L: 31/35 MS: 1 InsertByte- 00:07:23.503 #43 DONE cov: 12085 ft: 15503 corp: 39/949b lim: 35 exec/s: 21 rss: 71Mb 00:07:23.503 ###### Recommended dictionary. ###### 00:07:23.503 "\017\000\000\000\000\000\000\000" # Uses: 0 00:07:23.503 ###### End of recommended dictionary. 
###### 00:07:23.503 Done 43 runs in 2 second(s) 00:07:23.762 19:55:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:07:23.762 19:55:11 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:23.762 19:55:11 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:23.762 19:55:11 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:23.762 19:55:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:07:23.762 19:55:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:23.762 19:55:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:23.762 19:55:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:23.762 19:55:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:07:23.762 19:55:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:23.762 19:55:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:23.762 19:55:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:07:23.762 19:55:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4405 00:07:23.762 19:55:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:23.762 19:55:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:07:23.762 19:55:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:23.762 19:55:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:23.762 19:55:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:23.762 19:55:11 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:07:23.762 [2024-07-13 19:55:11.248523] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:23.762 [2024-07-13 19:55:11.248591] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3675818 ] 00:07:23.762 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.021 [2024-07-13 19:55:11.430437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.021 [2024-07-13 19:55:11.452059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.021 [2024-07-13 19:55:11.504280] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.021 [2024-07-13 19:55:11.520585] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:07:24.021 INFO: Running with entropic power schedule (0xFF, 100). 
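Note on the run that starts here: the trace above launches fuzzer type 5 against the TCP target on port 4405, and the NEW_FUNC line below shows it drives fuzz_admin_create_io_submission_queue_command, i.e. it mutates CREATE IO SQ (opcode 01) admin commands. The qid/cid/nsid/cdw10/cdw11 fields printed by nvme_admin_qpair_print_command in the records that follow are the fields of that admin command. As a minimal sketch only (field names as in spdk/nvme_spec.h; this is not the body of the actual fuzz target in llvm_nvme_fuzz.c), such a command could be filled in like this:

    #include <string.h>
    #include "spdk/nvme_spec.h"

    /* Sketch: populate a CREATE IO SQ admin command with the fields that
     * nvme_admin_qpair_print_command() prints in the records below
     * (opc, cid, nsid, cdw10, cdw11). Illustrative only, not the fuzz
     * target's real code. */
    static void
    build_create_io_sq(struct spdk_nvme_cmd *cmd, uint16_t cid,
                       uint32_t cdw10, uint32_t cdw11)
    {
            memset(cmd, 0, sizeof(*cmd));
            cmd->opc   = SPDK_NVME_OPC_CREATE_IO_SQ; /* "(01)" in the log */
            cmd->cid   = cid;                        /* e.g. cid:4, cid:5 */
            cmd->nsid  = 0;                          /* nsid:0 */
            cmd->cdw10 = cdw10;                      /* QSIZE[31:16] | QID[15:0] */
            cmd->cdw11 = cdw11;                      /* CQID[31:16] | QPRIO[2:1] | PC[0] */
    }

The paired spdk_nvme_print_completion records report INVALID OPCODE (00/01) for these commands, which is expected from an NVMe-oF target, since fabrics targets create queues through Connect rather than the PCIe-style queue-creation opcodes.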
00:07:24.021 INFO: Seed: 1655052055 00:07:24.021 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:24.021 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:24.021 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:24.021 INFO: A corpus is not provided, starting from an empty corpus 00:07:24.021 #2 INITED exec/s: 0 rss: 63Mb 00:07:24.021 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:24.022 This may also happen if the target rejected all inputs we tried so far 00:07:24.022 [2024-07-13 19:55:11.576275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.022 [2024-07-13 19:55:11.576304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.022 [2024-07-13 19:55:11.576356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.022 [2024-07-13 19:55:11.576372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.022 [2024-07-13 19:55:11.576423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.022 [2024-07-13 19:55:11.576436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.022 [2024-07-13 19:55:11.576495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.022 [2024-07-13 19:55:11.576508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.282 NEW_FUNC[1/692]: 0x49bcb0 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:07:24.282 NEW_FUNC[2/692]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:24.282 #12 NEW cov: 11852 ft: 11846 corp: 2/38b lim: 45 exec/s: 0 rss: 70Mb L: 37/37 MS: 5 ChangeBit-CrossOver-CopyPart-EraseBytes-InsertRepeatedBytes- 00:07:24.282 [2024-07-13 19:55:11.886755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff3dff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.282 [2024-07-13 19:55:11.886790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.282 [2024-07-13 19:55:11.886844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.282 [2024-07-13 19:55:11.886858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.282 #14 NEW cov: 11982 ft: 12834 corp: 3/58b lim: 45 exec/s: 0 rss: 70Mb L: 20/37 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:24.282 [2024-07-13 19:55:11.926943] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.282 [2024-07-13 19:55:11.926971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.282 [2024-07-13 19:55:11.927038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.282 [2024-07-13 19:55:11.927052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.282 [2024-07-13 19:55:11.927104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.282 [2024-07-13 19:55:11.927117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.542 #16 NEW cov: 11988 ft: 13290 corp: 4/91b lim: 45 exec/s: 0 rss: 70Mb L: 33/37 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:24.542 [2024-07-13 19:55:11.966852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.542 [2024-07-13 19:55:11.966877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.542 [2024-07-13 19:55:11.966929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.542 [2024-07-13 19:55:11.966943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.542 #17 NEW cov: 12073 ft: 13604 corp: 5/117b lim: 45 exec/s: 0 rss: 70Mb L: 26/37 MS: 1 EraseBytes- 00:07:24.542 [2024-07-13 19:55:12.017033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.542 [2024-07-13 19:55:12.017057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.542 [2024-07-13 19:55:12.017111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.542 [2024-07-13 19:55:12.017125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.542 #18 NEW cov: 12073 ft: 13672 corp: 6/135b lim: 45 exec/s: 0 rss: 70Mb L: 18/37 MS: 1 EraseBytes- 00:07:24.542 [2024-07-13 19:55:12.056983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:fefefefe cdw11:fefe0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.542 [2024-07-13 19:55:12.057007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.542 #19 NEW cov: 12073 ft: 14420 corp: 7/144b lim: 45 exec/s: 0 rss: 70Mb L: 9/37 MS: 1 InsertRepeatedBytes- 00:07:24.542 [2024-07-13 19:55:12.097248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:24.542 [2024-07-13 19:55:12.097272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.542 [2024-07-13 19:55:12.097324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.542 [2024-07-13 19:55:12.097338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.542 #20 NEW cov: 12073 ft: 14478 corp: 8/162b lim: 45 exec/s: 0 rss: 70Mb L: 18/37 MS: 1 ShuffleBytes- 00:07:24.542 [2024-07-13 19:55:12.147392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:71ef7171 cdw11:c20f0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.542 [2024-07-13 19:55:12.147418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.542 [2024-07-13 19:55:12.147474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:71710000 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.542 [2024-07-13 19:55:12.147488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.542 #21 NEW cov: 12073 ft: 14504 corp: 9/180b lim: 45 exec/s: 0 rss: 70Mb L: 18/37 MS: 1 CMP- DE: "\357\302\017\204\030\177\000\000"- 00:07:24.542 [2024-07-13 19:55:12.197715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.542 [2024-07-13 19:55:12.197741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.542 [2024-07-13 19:55:12.197795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.542 [2024-07-13 19:55:12.197809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.542 [2024-07-13 19:55:12.197861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:71710000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.542 [2024-07-13 19:55:12.197875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.802 #22 NEW cov: 12073 ft: 14543 corp: 10/210b lim: 45 exec/s: 0 rss: 70Mb L: 30/37 MS: 1 CMP- DE: "\376\377\377\365"- 00:07:24.802 [2024-07-13 19:55:12.248115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:efc20000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.802 [2024-07-13 19:55:12.248142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.802 [2024-07-13 19:55:12.248210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00ff7f00 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.802 [2024-07-13 19:55:12.248225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.802 [2024-07-13 
19:55:12.248276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.802 [2024-07-13 19:55:12.248290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.802 [2024-07-13 19:55:12.248341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.802 [2024-07-13 19:55:12.248355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.802 [2024-07-13 19:55:12.248404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.802 [2024-07-13 19:55:12.248417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:24.802 #23 NEW cov: 12073 ft: 14692 corp: 11/255b lim: 45 exec/s: 0 rss: 70Mb L: 45/45 MS: 1 PersAutoDict- DE: "\357\302\017\204\030\177\000\000"- 00:07:24.802 [2024-07-13 19:55:12.297976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.802 [2024-07-13 19:55:12.298003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.802 [2024-07-13 19:55:12.298053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.802 [2024-07-13 19:55:12.298067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.802 [2024-07-13 19:55:12.298118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.802 [2024-07-13 19:55:12.298130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.802 #24 NEW cov: 12073 ft: 14735 corp: 12/282b lim: 45 exec/s: 0 rss: 70Mb L: 27/45 MS: 1 EraseBytes- 00:07:24.802 [2024-07-13 19:55:12.338062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.802 [2024-07-13 19:55:12.338087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.802 [2024-07-13 19:55:12.338141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.802 [2024-07-13 19:55:12.338154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.802 [2024-07-13 19:55:12.338206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:592efcf0 cdw11:5bb40001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.802 [2024-07-13 19:55:12.338221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 
p:0 m:0 dnr:0 00:07:24.802 #25 NEW cov: 12073 ft: 14800 corp: 13/317b lim: 45 exec/s: 0 rss: 70Mb L: 35/45 MS: 1 CMP- DE: "\374\360Y.[\264)\000"- 00:07:24.802 [2024-07-13 19:55:12.388102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.802 [2024-07-13 19:55:12.388127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.802 [2024-07-13 19:55:12.388178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.802 [2024-07-13 19:55:12.388192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.802 #26 NEW cov: 12073 ft: 14818 corp: 14/343b lim: 45 exec/s: 0 rss: 70Mb L: 26/45 MS: 1 PersAutoDict- DE: "\376\377\377\365"- 00:07:24.802 [2024-07-13 19:55:12.428165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.802 [2024-07-13 19:55:12.428189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.802 [2024-07-13 19:55:12.428243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.802 [2024-07-13 19:55:12.428256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.802 #27 NEW cov: 12073 ft: 14842 corp: 15/361b lim: 45 exec/s: 0 rss: 70Mb L: 18/45 MS: 1 CopyPart- 00:07:25.062 [2024-07-13 19:55:12.468600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.062 [2024-07-13 19:55:12.468625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.062 [2024-07-13 19:55:12.468680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.062 [2024-07-13 19:55:12.468694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.062 [2024-07-13 19:55:12.468745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.062 [2024-07-13 19:55:12.468758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.062 [2024-07-13 19:55:12.468811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.062 [2024-07-13 19:55:12.468824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.062 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:25.062 #28 NEW cov: 12096 ft: 14889 corp: 16/398b lim: 45 exec/s: 0 
rss: 70Mb L: 37/45 MS: 1 ShuffleBytes- 00:07:25.062 [2024-07-13 19:55:12.508462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.062 [2024-07-13 19:55:12.508487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.062 [2024-07-13 19:55:12.508541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:60717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.062 [2024-07-13 19:55:12.508555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.062 #29 NEW cov: 12096 ft: 14894 corp: 17/424b lim: 45 exec/s: 0 rss: 70Mb L: 26/45 MS: 1 ChangeByte- 00:07:25.062 [2024-07-13 19:55:12.548385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:fefefefe cdw11:fefe0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.062 [2024-07-13 19:55:12.548412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.062 #30 NEW cov: 12096 ft: 14953 corp: 18/433b lim: 45 exec/s: 30 rss: 70Mb L: 9/45 MS: 1 ChangeByte- 00:07:25.062 [2024-07-13 19:55:12.599007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.062 [2024-07-13 19:55:12.599032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.062 [2024-07-13 19:55:12.599084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.062 [2024-07-13 19:55:12.599097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.062 [2024-07-13 19:55:12.599149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.062 [2024-07-13 19:55:12.599163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.062 [2024-07-13 19:55:12.599215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffff3d cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.062 [2024-07-13 19:55:12.599228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.062 #31 NEW cov: 12096 ft: 14966 corp: 19/471b lim: 45 exec/s: 31 rss: 70Mb L: 38/45 MS: 1 InsertByte- 00:07:25.062 [2024-07-13 19:55:12.639268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.062 [2024-07-13 19:55:12.639294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.062 [2024-07-13 19:55:12.639361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.062 
[2024-07-13 19:55:12.639375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.062 [2024-07-13 19:55:12.639426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.062 [2024-07-13 19:55:12.639439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.062 [2024-07-13 19:55:12.639496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.062 [2024-07-13 19:55:12.639510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.062 [2024-07-13 19:55:12.639563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.062 [2024-07-13 19:55:12.639577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.062 #32 NEW cov: 12096 ft: 14982 corp: 20/516b lim: 45 exec/s: 32 rss: 70Mb L: 45/45 MS: 1 CopyPart- 00:07:25.062 [2024-07-13 19:55:12.689095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.062 [2024-07-13 19:55:12.689119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.062 [2024-07-13 19:55:12.689173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.062 [2024-07-13 19:55:12.689190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.062 [2024-07-13 19:55:12.689257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:0afe0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.062 [2024-07-13 19:55:12.689271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.327 #33 NEW cov: 12096 ft: 14989 corp: 21/544b lim: 45 exec/s: 33 rss: 71Mb L: 28/45 MS: 1 EraseBytes- 00:07:25.327 [2024-07-13 19:55:12.739115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.327 [2024-07-13 19:55:12.739140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.327 [2024-07-13 19:55:12.739194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:187f0f84 cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.327 [2024-07-13 19:55:12.739209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.327 #34 NEW cov: 12096 ft: 14994 corp: 22/562b lim: 45 exec/s: 34 rss: 71Mb L: 18/45 MS: 1 CrossOver- 00:07:25.327 [2024-07-13 19:55:12.779206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE 
IO SQ (01) qid:0 cid:4 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.327 [2024-07-13 19:55:12.779232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.327 [2024-07-13 19:55:12.779287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.327 [2024-07-13 19:55:12.779300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.327 #35 NEW cov: 12096 ft: 15005 corp: 23/581b lim: 45 exec/s: 35 rss: 71Mb L: 19/45 MS: 1 EraseBytes- 00:07:25.327 [2024-07-13 19:55:12.829366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:71717171 cdw11:71fe0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.327 [2024-07-13 19:55:12.829391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.327 [2024-07-13 19:55:12.829449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.327 [2024-07-13 19:55:12.829463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.327 #36 NEW cov: 12096 ft: 15015 corp: 24/607b lim: 45 exec/s: 36 rss: 71Mb L: 26/45 MS: 1 PersAutoDict- DE: "\376\377\377\365"- 00:07:25.327 [2024-07-13 19:55:12.879474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.327 [2024-07-13 19:55:12.879499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.327 [2024-07-13 19:55:12.879550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.327 [2024-07-13 19:55:12.879563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.327 #37 NEW cov: 12096 ft: 15049 corp: 25/627b lim: 45 exec/s: 37 rss: 71Mb L: 20/45 MS: 1 InsertByte- 00:07:25.327 [2024-07-13 19:55:12.929640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c1c11cc1 cdw11:c1c10006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.327 [2024-07-13 19:55:12.929666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.327 [2024-07-13 19:55:12.929721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c1c1c1c1 cdw11:c1c10006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.327 [2024-07-13 19:55:12.929735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.327 #39 NEW cov: 12096 ft: 15060 corp: 26/650b lim: 45 exec/s: 39 rss: 71Mb L: 23/45 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:25.327 [2024-07-13 19:55:12.969868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:25.327 [2024-07-13 19:55:12.969893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.327 [2024-07-13 19:55:12.969945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.327 [2024-07-13 19:55:12.969959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.327 [2024-07-13 19:55:12.970009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.327 [2024-07-13 19:55:12.970038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.585 #40 NEW cov: 12096 ft: 15078 corp: 27/677b lim: 45 exec/s: 40 rss: 71Mb L: 27/45 MS: 1 CopyPart- 00:07:25.585 [2024-07-13 19:55:13.009834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.585 [2024-07-13 19:55:13.009858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.585 [2024-07-13 19:55:13.009926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:187f0f84 cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.585 [2024-07-13 19:55:13.009939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.585 #41 NEW cov: 12096 ft: 15083 corp: 28/695b lim: 45 exec/s: 41 rss: 71Mb L: 18/45 MS: 1 ChangeBinInt- 00:07:25.585 [2024-07-13 19:55:13.060267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.585 [2024-07-13 19:55:13.060292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.585 [2024-07-13 19:55:13.060346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.585 [2024-07-13 19:55:13.060360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.585 [2024-07-13 19:55:13.060414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.585 [2024-07-13 19:55:13.060427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.585 [2024-07-13 19:55:13.060480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffff3d cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.585 [2024-07-13 19:55:13.060493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.585 #42 NEW cov: 12096 ft: 15087 corp: 29/733b lim: 45 exec/s: 42 rss: 71Mb L: 38/45 MS: 1 ShuffleBytes- 00:07:25.585 [2024-07-13 19:55:13.110261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c20f3def cdw11:84180003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.585 [2024-07-13 19:55:13.110288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.585 [2024-07-13 19:55:13.110342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.585 [2024-07-13 19:55:13.110354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.585 #43 NEW cov: 12096 ft: 15132 corp: 30/753b lim: 45 exec/s: 43 rss: 71Mb L: 20/45 MS: 1 PersAutoDict- DE: "\357\302\017\204\030\177\000\000"- 00:07:25.585 [2024-07-13 19:55:13.160237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:71717171 cdw11:71710007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.585 [2024-07-13 19:55:13.160263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.585 [2024-07-13 19:55:13.160316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:7171f571 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.585 [2024-07-13 19:55:13.160329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.585 #44 NEW cov: 12096 ft: 15145 corp: 31/776b lim: 45 exec/s: 44 rss: 72Mb L: 23/45 MS: 1 PersAutoDict- DE: "\376\377\377\365"- 00:07:25.585 [2024-07-13 19:55:13.200656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.586 [2024-07-13 19:55:13.200681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.586 [2024-07-13 19:55:13.200735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.586 [2024-07-13 19:55:13.200748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.586 [2024-07-13 19:55:13.200797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.586 [2024-07-13 19:55:13.200810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.586 [2024-07-13 19:55:13.200864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffff3d cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.586 [2024-07-13 19:55:13.200877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.586 #45 NEW cov: 12096 ft: 15160 corp: 32/814b lim: 45 exec/s: 45 rss: 72Mb L: 38/45 MS: 1 ShuffleBytes- 00:07:25.845 [2024-07-13 19:55:13.250501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:71717171 cdw11:71710007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.845 [2024-07-13 19:55:13.250526] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.845 [2024-07-13 19:55:13.250581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:7171f571 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.845 [2024-07-13 19:55:13.250595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.845 #46 NEW cov: 12096 ft: 15172 corp: 33/834b lim: 45 exec/s: 46 rss: 72Mb L: 20/45 MS: 1 CrossOver- 00:07:25.845 [2024-07-13 19:55:13.300651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:71ef7171 cdw11:c20f0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.845 [2024-07-13 19:55:13.300676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.845 [2024-07-13 19:55:13.300732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:71710008 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.845 [2024-07-13 19:55:13.300746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.845 #47 NEW cov: 12096 ft: 15180 corp: 34/852b lim: 45 exec/s: 47 rss: 72Mb L: 18/45 MS: 1 ChangeBit- 00:07:25.845 [2024-07-13 19:55:13.350640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:fefefefe cdw11:fe150000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.845 [2024-07-13 19:55:13.350665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.845 #48 NEW cov: 12096 ft: 15182 corp: 35/867b lim: 45 exec/s: 48 rss: 72Mb L: 15/45 MS: 1 InsertRepeatedBytes- 00:07:25.845 [2024-07-13 19:55:13.390869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:71717171 cdw11:71fe0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.845 [2024-07-13 19:55:13.390894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.845 [2024-07-13 19:55:13.390948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.845 [2024-07-13 19:55:13.390962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.845 #49 NEW cov: 12096 ft: 15198 corp: 36/893b lim: 45 exec/s: 49 rss: 72Mb L: 26/45 MS: 1 ChangeBit- 00:07:25.845 [2024-07-13 19:55:13.441159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.845 [2024-07-13 19:55:13.441185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.845 [2024-07-13 19:55:13.441239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.845 [2024-07-13 19:55:13.441252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.845 [2024-07-13 
19:55:13.441305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:71717171 cdw11:0afe0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.845 [2024-07-13 19:55:13.441318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.845 #50 NEW cov: 12096 ft: 15204 corp: 37/921b lim: 45 exec/s: 50 rss: 72Mb L: 28/45 MS: 1 CopyPart- 00:07:25.845 [2024-07-13 19:55:13.491148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:71710671 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.846 [2024-07-13 19:55:13.491173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.846 [2024-07-13 19:55:13.491227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:7171fff5 cdw11:71710003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.846 [2024-07-13 19:55:13.491241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.105 #51 NEW cov: 12096 ft: 15234 corp: 38/942b lim: 45 exec/s: 51 rss: 72Mb L: 21/45 MS: 1 InsertByte- 00:07:26.105 [2024-07-13 19:55:13.541140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0f847171 cdw11:187f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:26.105 [2024-07-13 19:55:13.541164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.105 #52 NEW cov: 12096 ft: 15239 corp: 39/957b lim: 45 exec/s: 26 rss: 72Mb L: 15/45 MS: 1 EraseBytes- 00:07:26.105 #52 DONE cov: 12096 ft: 15239 corp: 39/957b lim: 45 exec/s: 26 rss: 72Mb 00:07:26.105 ###### Recommended dictionary. ###### 00:07:26.105 "\357\302\017\204\030\177\000\000" # Uses: 2 00:07:26.105 "\376\377\377\365" # Uses: 3 00:07:26.105 "\374\360Y.[\264)\000" # Uses: 0 00:07:26.105 ###### End of recommended dictionary. 
###### 00:07:26.105 Done 52 runs in 2 second(s) 00:07:26.105 19:55:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:07:26.105 19:55:13 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:26.105 19:55:13 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:26.105 19:55:13 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:26.105 19:55:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:07:26.105 19:55:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:26.105 19:55:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:26.105 19:55:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:26.105 19:55:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:07:26.105 19:55:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:26.105 19:55:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:26.105 19:55:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:07:26.105 19:55:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4406 00:07:26.105 19:55:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:26.105 19:55:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:07:26.105 19:55:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:26.105 19:55:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:26.105 19:55:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:26.105 19:55:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:07:26.105 [2024-07-13 19:55:13.720096] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:26.105 [2024-07-13 19:55:13.720176] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3676204 ] 00:07:26.105 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.364 [2024-07-13 19:55:13.895501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.364 [2024-07-13 19:55:13.917421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.364 [2024-07-13 19:55:13.969707] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:26.364 [2024-07-13 19:55:13.986008] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:07:26.364 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:26.364 INFO: Seed: 4121058819 00:07:26.623 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:26.623 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:26.623 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:26.623 INFO: A corpus is not provided, starting from an empty corpus 00:07:26.623 #2 INITED exec/s: 0 rss: 62Mb 00:07:26.623 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:26.623 This may also happen if the target rejected all inputs we tried so far 00:07:26.623 [2024-07-13 19:55:14.052278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002f01 cdw11:00000000 00:07:26.623 [2024-07-13 19:55:14.052317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.883 NEW_FUNC[1/690]: 0x49e4c0 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:07:26.883 NEW_FUNC[2/690]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:26.883 #4 NEW cov: 11768 ft: 11769 corp: 2/3b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 2 ChangeBinInt-InsertByte- 00:07:26.883 [2024-07-13 19:55:14.403649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002f01 cdw11:00000000 00:07:26.883 [2024-07-13 19:55:14.403689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.883 [2024-07-13 19:55:14.403807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:26.883 [2024-07-13 19:55:14.403823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.883 [2024-07-13 19:55:14.403938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:26.883 [2024-07-13 19:55:14.403955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.883 [2024-07-13 19:55:14.404069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:26.883 [2024-07-13 19:55:14.404087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.883 [2024-07-13 19:55:14.404204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:26.883 [2024-07-13 19:55:14.404221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:26.883 #5 NEW cov: 11899 ft: 12637 corp: 3/13b lim: 10 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:26.883 [2024-07-13 19:55:14.462619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:26.883 [2024-07-13 19:55:14.462646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
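In this run (fuzzer type 6) the records come from fuzz_admin_delete_io_completion_queue_command, so the mutated field of interest is cdw10 of a DELETE IO CQ (opcode 04) command; per the NVMe specification only bits 15:0 of CDW10 carry the queue identifier. A small standalone decode of one of the cdw10 values printed above (cdw10:00002f01), as an illustration of how to read these records:

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the QID from a DELETE IO CQ (04) record as printed in the log.
     * Per the NVMe spec, CDW10 bits 15:0 hold the queue identifier and the
     * upper 16 bits are reserved. Value taken from the record above. */
    int main(void)
    {
            uint32_t cdw10 = 0x00002f01;
            uint16_t qid = (uint16_t)(cdw10 & 0xffff);

            printf("DELETE IO CQ: qid=0x%04x\n", qid); /* prints qid=0x2f01 */
            return 0;
    }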
00:07:26.883 #6 NEW cov: 11905 ft: 12974 corp: 4/15b lim: 10 exec/s: 0 rss: 70Mb L: 2/10 MS: 1 CopyPart- 00:07:26.883 [2024-07-13 19:55:14.503039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a0a cdw11:00000000 00:07:26.883 [2024-07-13 19:55:14.503067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.883 #7 NEW cov: 11990 ft: 13288 corp: 5/17b lim: 10 exec/s: 0 rss: 70Mb L: 2/10 MS: 1 ChangeBit- 00:07:27.142 [2024-07-13 19:55:14.552777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ba0a cdw11:00000000 00:07:27.142 [2024-07-13 19:55:14.552804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.142 #8 NEW cov: 11990 ft: 13467 corp: 6/19b lim: 10 exec/s: 0 rss: 70Mb L: 2/10 MS: 1 ChangeByte- 00:07:27.142 [2024-07-13 19:55:14.593276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002f0a cdw11:00000000 00:07:27.142 [2024-07-13 19:55:14.593304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.142 #9 NEW cov: 11990 ft: 13515 corp: 7/22b lim: 10 exec/s: 0 rss: 70Mb L: 3/10 MS: 1 InsertByte- 00:07:27.142 [2024-07-13 19:55:14.643787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ba0a cdw11:00000000 00:07:27.142 [2024-07-13 19:55:14.643813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.142 [2024-07-13 19:55:14.643926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00003434 cdw11:00000000 00:07:27.142 [2024-07-13 19:55:14.643943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.142 [2024-07-13 19:55:14.644063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00003434 cdw11:00000000 00:07:27.142 [2024-07-13 19:55:14.644080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.142 #10 NEW cov: 11990 ft: 13755 corp: 8/28b lim: 10 exec/s: 0 rss: 70Mb L: 6/10 MS: 1 InsertRepeatedBytes- 00:07:27.142 [2024-07-13 19:55:14.693545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000340a cdw11:00000000 00:07:27.142 [2024-07-13 19:55:14.693572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.142 #11 NEW cov: 11990 ft: 13876 corp: 9/30b lim: 10 exec/s: 0 rss: 70Mb L: 2/10 MS: 1 CrossOver- 00:07:27.142 [2024-07-13 19:55:14.733621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000125 cdw11:00000000 00:07:27.142 [2024-07-13 19:55:14.733646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.142 #13 NEW cov: 11990 ft: 13902 corp: 10/32b lim: 10 exec/s: 0 rss: 70Mb L: 2/10 MS: 2 EraseBytes-InsertByte- 00:07:27.142 [2024-07-13 19:55:14.773777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) 
qid:0 cid:4 nsid:0 cdw10:0000270a cdw11:00000000 00:07:27.142 [2024-07-13 19:55:14.773803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.401 #18 NEW cov: 11990 ft: 13947 corp: 11/34b lim: 10 exec/s: 0 rss: 70Mb L: 2/10 MS: 5 EraseBytes-ChangeBit-ChangeBit-CrossOver-InsertByte- 00:07:27.402 [2024-07-13 19:55:14.824580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000d8d8 cdw11:00000000 00:07:27.402 [2024-07-13 19:55:14.824607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.402 [2024-07-13 19:55:14.824734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000d8d8 cdw11:00000000 00:07:27.402 [2024-07-13 19:55:14.824752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.402 [2024-07-13 19:55:14.824863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000d8d8 cdw11:00000000 00:07:27.402 [2024-07-13 19:55:14.824880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.402 [2024-07-13 19:55:14.824993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000d82a cdw11:00000000 00:07:27.402 [2024-07-13 19:55:14.825008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.402 #19 NEW cov: 11990 ft: 13981 corp: 12/43b lim: 10 exec/s: 0 rss: 70Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:07:27.402 [2024-07-13 19:55:14.874052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000340a cdw11:00000000 00:07:27.402 [2024-07-13 19:55:14.874078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.402 #20 NEW cov: 11990 ft: 13982 corp: 13/46b lim: 10 exec/s: 0 rss: 70Mb L: 3/10 MS: 1 InsertByte- 00:07:27.402 [2024-07-13 19:55:14.924817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:27.402 [2024-07-13 19:55:14.924842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.402 [2024-07-13 19:55:14.924968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:27.402 [2024-07-13 19:55:14.924985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.402 [2024-07-13 19:55:14.925109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:27.402 [2024-07-13 19:55:14.925125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.402 [2024-07-13 19:55:14.925239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:27.402 [2024-07-13 19:55:14.925256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.402 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:27.402 #21 NEW cov: 12013 ft: 14042 corp: 14/54b lim: 10 exec/s: 0 rss: 70Mb L: 8/10 MS: 1 InsertRepeatedBytes- 00:07:27.402 [2024-07-13 19:55:14.964720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ba0a cdw11:00000000 00:07:27.402 [2024-07-13 19:55:14.964747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.402 [2024-07-13 19:55:14.964879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00003c34 cdw11:00000000 00:07:27.402 [2024-07-13 19:55:14.964896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.402 [2024-07-13 19:55:14.965017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00003434 cdw11:00000000 00:07:27.402 [2024-07-13 19:55:14.965034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.402 #22 NEW cov: 12013 ft: 14104 corp: 15/60b lim: 10 exec/s: 0 rss: 70Mb L: 6/10 MS: 1 ChangeBit- 00:07:27.402 [2024-07-13 19:55:15.014485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e34 cdw11:00000000 00:07:27.402 [2024-07-13 19:55:15.014511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.402 #23 NEW cov: 12013 ft: 14136 corp: 16/63b lim: 10 exec/s: 23 rss: 70Mb L: 3/10 MS: 1 InsertByte- 00:07:27.402 [2024-07-13 19:55:15.054565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002f7e cdw11:00000000 00:07:27.402 [2024-07-13 19:55:15.054592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.661 #24 NEW cov: 12013 ft: 14172 corp: 17/66b lim: 10 exec/s: 24 rss: 70Mb L: 3/10 MS: 1 InsertByte- 00:07:27.661 [2024-07-13 19:55:15.095346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000125 cdw11:00000000 00:07:27.661 [2024-07-13 19:55:15.095371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.661 [2024-07-13 19:55:15.095493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.661 [2024-07-13 19:55:15.095513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.661 [2024-07-13 19:55:15.095627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.661 [2024-07-13 19:55:15.095642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.661 [2024-07-13 19:55:15.095758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.661 [2024-07-13 19:55:15.095776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.661 [2024-07-13 19:55:15.095885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.661 [2024-07-13 19:55:15.095901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:27.661 #25 NEW cov: 12013 ft: 14182 corp: 18/76b lim: 10 exec/s: 25 rss: 70Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:27.661 [2024-07-13 19:55:15.135418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000abb cdw11:00000000 00:07:27.661 [2024-07-13 19:55:15.135448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.661 [2024-07-13 19:55:15.135562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000bbbb cdw11:00000000 00:07:27.661 [2024-07-13 19:55:15.135578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.661 [2024-07-13 19:55:15.135696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000bbbb cdw11:00000000 00:07:27.661 [2024-07-13 19:55:15.135712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.661 [2024-07-13 19:55:15.135829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000bbbb cdw11:00000000 00:07:27.661 [2024-07-13 19:55:15.135844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.661 #27 NEW cov: 12013 ft: 14208 corp: 19/85b lim: 10 exec/s: 27 rss: 70Mb L: 9/10 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:27.661 [2024-07-13 19:55:15.194992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002f0a cdw11:00000000 00:07:27.661 [2024-07-13 19:55:15.195019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.661 #28 NEW cov: 12013 ft: 14218 corp: 20/88b lim: 10 exec/s: 28 rss: 70Mb L: 3/10 MS: 1 CrossOver- 00:07:27.661 [2024-07-13 19:55:15.245745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:27.661 [2024-07-13 19:55:15.245772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.661 [2024-07-13 19:55:15.245895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000041ff cdw11:00000000 00:07:27.661 [2024-07-13 19:55:15.245913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.661 [2024-07-13 19:55:15.246027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:27.661 [2024-07-13 19:55:15.246042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.661 [2024-07-13 19:55:15.246152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 
cdw10:0000ffff cdw11:00000000 00:07:27.661 [2024-07-13 19:55:15.246168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.661 #29 NEW cov: 12013 ft: 14228 corp: 21/96b lim: 10 exec/s: 29 rss: 70Mb L: 8/10 MS: 1 ChangeByte- 00:07:27.661 [2024-07-13 19:55:15.285528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002f01 cdw11:00000000 00:07:27.662 [2024-07-13 19:55:15.285553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.662 [2024-07-13 19:55:15.285666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:27.662 [2024-07-13 19:55:15.285684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.662 [2024-07-13 19:55:15.285796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:27.662 [2024-07-13 19:55:15.285814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.662 [2024-07-13 19:55:15.285920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.662 [2024-07-13 19:55:15.285937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.662 [2024-07-13 19:55:15.286048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:000000ff cdw11:00000000 00:07:27.662 [2024-07-13 19:55:15.286064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:27.662 #30 NEW cov: 12013 ft: 14341 corp: 22/106b lim: 10 exec/s: 30 rss: 71Mb L: 10/10 MS: 1 ChangeBinInt- 00:07:27.921 [2024-07-13 19:55:15.336145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:27.921 [2024-07-13 19:55:15.336173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.921 [2024-07-13 19:55:15.336286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:27.921 [2024-07-13 19:55:15.336304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.921 [2024-07-13 19:55:15.336411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:27.921 [2024-07-13 19:55:15.336427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.921 [2024-07-13 19:55:15.336541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:27.921 [2024-07-13 19:55:15.336560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.921 [2024-07-13 19:55:15.336678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 
cdw10:0000ffff cdw11:00000000 00:07:27.921 [2024-07-13 19:55:15.336694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:27.921 #31 NEW cov: 12013 ft: 14353 corp: 23/116b lim: 10 exec/s: 31 rss: 71Mb L: 10/10 MS: 1 CopyPart- 00:07:27.921 [2024-07-13 19:55:15.376224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ba0a cdw11:00000000 00:07:27.921 [2024-07-13 19:55:15.376252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.921 [2024-07-13 19:55:15.376367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00003c34 cdw11:00000000 00:07:27.921 [2024-07-13 19:55:15.376384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.921 [2024-07-13 19:55:15.376497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000034ca cdw11:00000000 00:07:27.921 [2024-07-13 19:55:15.376514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.921 [2024-07-13 19:55:15.376618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000caca cdw11:00000000 00:07:27.921 [2024-07-13 19:55:15.376634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.921 [2024-07-13 19:55:15.376743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ca34 cdw11:00000000 00:07:27.921 [2024-07-13 19:55:15.376758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:27.921 #32 NEW cov: 12013 ft: 14368 corp: 24/126b lim: 10 exec/s: 32 rss: 71Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:27.921 [2024-07-13 19:55:15.425791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ba0a cdw11:00000000 00:07:27.921 [2024-07-13 19:55:15.425817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.921 [2024-07-13 19:55:15.425935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000125 cdw11:00000000 00:07:27.921 [2024-07-13 19:55:15.425950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.921 #33 NEW cov: 12013 ft: 14528 corp: 25/130b lim: 10 exec/s: 33 rss: 71Mb L: 4/10 MS: 1 CrossOver- 00:07:27.921 [2024-07-13 19:55:15.466214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000340a cdw11:00000000 00:07:27.921 [2024-07-13 19:55:15.466240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.921 [2024-07-13 19:55:15.466356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.922 [2024-07-13 19:55:15.466374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.922 
[2024-07-13 19:55:15.466488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.922 [2024-07-13 19:55:15.466504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.922 [2024-07-13 19:55:15.466619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.922 [2024-07-13 19:55:15.466635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.922 #34 NEW cov: 12013 ft: 14546 corp: 26/138b lim: 10 exec/s: 34 rss: 71Mb L: 8/10 MS: 1 InsertRepeatedBytes- 00:07:27.922 [2024-07-13 19:55:15.506688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00001f1f cdw11:00000000 00:07:27.922 [2024-07-13 19:55:15.506715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.922 [2024-07-13 19:55:15.506824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00001f1f cdw11:00000000 00:07:27.922 [2024-07-13 19:55:15.506850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.922 [2024-07-13 19:55:15.506966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00001f1f cdw11:00000000 00:07:27.922 [2024-07-13 19:55:15.506982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.922 [2024-07-13 19:55:15.507094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00001f1f cdw11:00000000 00:07:27.922 [2024-07-13 19:55:15.507109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.922 [2024-07-13 19:55:15.507223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000125 cdw11:00000000 00:07:27.922 [2024-07-13 19:55:15.507239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:27.922 #35 NEW cov: 12013 ft: 14558 corp: 27/148b lim: 10 exec/s: 35 rss: 71Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:27.922 [2024-07-13 19:55:15.555976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000baba cdw11:00000000 00:07:27.922 [2024-07-13 19:55:15.556003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.922 #36 NEW cov: 12013 ft: 14563 corp: 28/150b lim: 10 exec/s: 36 rss: 71Mb L: 2/10 MS: 1 CopyPart- 00:07:28.181 [2024-07-13 19:55:15.606158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:28.181 [2024-07-13 19:55:15.606186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.181 #37 NEW cov: 12013 ft: 14585 corp: 29/152b lim: 10 exec/s: 37 rss: 71Mb L: 2/10 MS: 1 EraseBytes- 00:07:28.181 [2024-07-13 19:55:15.656697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE 
IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ba0a cdw11:00000000 00:07:28.181 [2024-07-13 19:55:15.656723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.181 [2024-07-13 19:55:15.656834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00003c34 cdw11:00000000 00:07:28.181 [2024-07-13 19:55:15.656850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.181 [2024-07-13 19:55:15.656969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000034ca cdw11:00000000 00:07:28.181 [2024-07-13 19:55:15.656987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.181 [2024-07-13 19:55:15.657104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000cac8 cdw11:00000000 00:07:28.181 [2024-07-13 19:55:15.657122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.181 [2024-07-13 19:55:15.657229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ca34 cdw11:00000000 00:07:28.181 [2024-07-13 19:55:15.657246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:28.181 #38 NEW cov: 12013 ft: 14625 corp: 30/162b lim: 10 exec/s: 38 rss: 71Mb L: 10/10 MS: 1 ChangeBit- 00:07:28.181 [2024-07-13 19:55:15.716500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00006d0a cdw11:00000000 00:07:28.181 [2024-07-13 19:55:15.716527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.181 #39 NEW cov: 12013 ft: 14638 corp: 31/164b lim: 10 exec/s: 39 rss: 71Mb L: 2/10 MS: 1 InsertByte- 00:07:28.181 [2024-07-13 19:55:15.757494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002f01 cdw11:00000000 00:07:28.181 [2024-07-13 19:55:15.757520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.181 [2024-07-13 19:55:15.757636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:28.181 [2024-07-13 19:55:15.757653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.181 [2024-07-13 19:55:15.757774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:28.181 [2024-07-13 19:55:15.757791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.181 [2024-07-13 19:55:15.757897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:28.181 [2024-07-13 19:55:15.757911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.181 [2024-07-13 19:55:15.758026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO 
CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:28.181 [2024-07-13 19:55:15.758042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:28.181 #40 NEW cov: 12013 ft: 14724 corp: 32/174b lim: 10 exec/s: 40 rss: 71Mb L: 10/10 MS: 1 ChangeByte- 00:07:28.181 [2024-07-13 19:55:15.807596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:28.181 [2024-07-13 19:55:15.807624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.181 [2024-07-13 19:55:15.807740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000dfff cdw11:00000000 00:07:28.181 [2024-07-13 19:55:15.807757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.181 [2024-07-13 19:55:15.807878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:28.181 [2024-07-13 19:55:15.807894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.181 [2024-07-13 19:55:15.808012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:28.181 [2024-07-13 19:55:15.808030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.181 [2024-07-13 19:55:15.808152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:28.181 [2024-07-13 19:55:15.808168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:28.181 #41 NEW cov: 12013 ft: 14734 corp: 33/184b lim: 10 exec/s: 41 rss: 71Mb L: 10/10 MS: 1 ChangeBit- 00:07:28.441 [2024-07-13 19:55:15.857322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:28.441 [2024-07-13 19:55:15.857349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.441 [2024-07-13 19:55:15.857462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000041ff cdw11:00000000 00:07:28.441 [2024-07-13 19:55:15.857481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.441 [2024-07-13 19:55:15.857596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:28.441 [2024-07-13 19:55:15.857612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.441 [2024-07-13 19:55:15.857728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:28.441 [2024-07-13 19:55:15.857744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.441 #42 NEW cov: 12013 ft: 14737 corp: 34/193b lim: 10 exec/s: 42 rss: 71Mb L: 9/10 MS: 1 CopyPart- 
00:07:28.441 [2024-07-13 19:55:15.917624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a01 cdw11:00000000 00:07:28.441 [2024-07-13 19:55:15.917650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.441 [2024-07-13 19:55:15.917771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:28.441 [2024-07-13 19:55:15.917789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.441 [2024-07-13 19:55:15.917900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:28.441 [2024-07-13 19:55:15.917921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.441 [2024-07-13 19:55:15.918033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:28.441 [2024-07-13 19:55:15.918051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.441 #43 NEW cov: 12013 ft: 14745 corp: 35/201b lim: 10 exec/s: 43 rss: 71Mb L: 8/10 MS: 1 CrossOver- 00:07:28.441 [2024-07-13 19:55:15.957081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002e0a cdw11:00000000 00:07:28.441 [2024-07-13 19:55:15.957108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.441 #44 NEW cov: 12013 ft: 14754 corp: 36/203b lim: 10 exec/s: 44 rss: 71Mb L: 2/10 MS: 1 EraseBytes- 00:07:28.441 [2024-07-13 19:55:16.017724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:28.441 [2024-07-13 19:55:16.017751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.441 [2024-07-13 19:55:16.017875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:28.441 [2024-07-13 19:55:16.017894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.441 [2024-07-13 19:55:16.018009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000000ba cdw11:00000000 00:07:28.441 [2024-07-13 19:55:16.018025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.441 #45 NEW cov: 12013 ft: 14759 corp: 37/210b lim: 10 exec/s: 22 rss: 72Mb L: 7/10 MS: 1 InsertRepeatedBytes- 00:07:28.441 #45 DONE cov: 12013 ft: 14759 corp: 37/210b lim: 10 exec/s: 22 rss: 72Mb 00:07:28.441 Done 45 runs in 2 second(s) 00:07:28.701 19:55:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:07:28.701 19:55:16 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:28.701 19:55:16 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:28.701 19:55:16 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:07:28.701 19:55:16 llvm_fuzz.nvmf_fuzz -- 
nvmf/run.sh@23 -- # local fuzzer_type=7 00:07:28.701 19:55:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:28.701 19:55:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:28.701 19:55:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:28.701 19:55:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:07:28.701 19:55:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:28.701 19:55:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:28.701 19:55:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:07:28.701 19:55:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4407 00:07:28.701 19:55:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:28.701 19:55:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:07:28.701 19:55:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:28.701 19:55:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:28.701 19:55:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:28.701 19:55:16 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:07:28.701 [2024-07-13 19:55:16.194974] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:28.701 [2024-07-13 19:55:16.195042] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3676733 ] 00:07:28.701 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.961 [2024-07-13 19:55:16.374219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.961 [2024-07-13 19:55:16.396050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.961 [2024-07-13 19:55:16.448231] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:28.961 [2024-07-13 19:55:16.464503] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:07:28.961 INFO: Running with entropic power schedule (0xFF, 100). 00:07:28.961 INFO: Seed: 2304086755 00:07:28.961 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:28.961 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:28.961 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:28.961 INFO: A corpus is not provided, starting from an empty corpus 00:07:28.961 #2 INITED exec/s: 0 rss: 62Mb 00:07:28.961 WARNING: no interesting inputs were found so far. 
Is the code instrumented for coverage? 00:07:28.961 This may also happen if the target rejected all inputs we tried so far 00:07:28.961 [2024-07-13 19:55:16.519979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:28.961 [2024-07-13 19:55:16.520008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.961 [2024-07-13 19:55:16.520060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:28.961 [2024-07-13 19:55:16.520074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.961 [2024-07-13 19:55:16.520124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000c5d cdw11:00000000 00:07:28.961 [2024-07-13 19:55:16.520137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.219 NEW_FUNC[1/689]: 0x49eeb0 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:07:29.219 NEW_FUNC[2/689]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:29.219 #4 NEW cov: 11767 ft: 11770 corp: 2/7b lim: 10 exec/s: 0 rss: 69Mb L: 6/6 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:29.219 [2024-07-13 19:55:16.850852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:29.219 [2024-07-13 19:55:16.850896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.219 [2024-07-13 19:55:16.850960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:29.219 [2024-07-13 19:55:16.850980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.219 [2024-07-13 19:55:16.851042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000c5d cdw11:00000000 00:07:29.219 [2024-07-13 19:55:16.851061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.477 NEW_FUNC[1/1]: 0x134c500 in nvmf_transport_poll_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:727 00:07:29.477 #5 NEW cov: 11899 ft: 12421 corp: 3/13b lim: 10 exec/s: 0 rss: 69Mb L: 6/6 MS: 1 ShuffleBytes- 00:07:29.477 [2024-07-13 19:55:16.910993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:29.477 [2024-07-13 19:55:16.911018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.477 [2024-07-13 19:55:16.911085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff0c cdw11:00000000 00:07:29.477 [2024-07-13 19:55:16.911099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.477 [2024-07-13 
19:55:16.911148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:29.477 [2024-07-13 19:55:16.911161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.477 [2024-07-13 19:55:16.911211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:29.477 [2024-07-13 19:55:16.911224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.477 #6 NEW cov: 11905 ft: 12896 corp: 4/22b lim: 10 exec/s: 0 rss: 69Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:07:29.477 [2024-07-13 19:55:16.960883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:29.477 [2024-07-13 19:55:16.960908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.477 [2024-07-13 19:55:16.960959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000c5d cdw11:00000000 00:07:29.477 [2024-07-13 19:55:16.960972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.477 #7 NEW cov: 11990 ft: 13326 corp: 5/26b lim: 10 exec/s: 0 rss: 69Mb L: 4/9 MS: 1 EraseBytes- 00:07:29.477 [2024-07-13 19:55:17.001193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:29.477 [2024-07-13 19:55:17.001217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.477 [2024-07-13 19:55:17.001284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:29.477 [2024-07-13 19:55:17.001298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.477 [2024-07-13 19:55:17.001348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:29.477 [2024-07-13 19:55:17.001361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.477 #8 NEW cov: 11990 ft: 13451 corp: 6/33b lim: 10 exec/s: 0 rss: 70Mb L: 7/9 MS: 1 EraseBytes- 00:07:29.477 [2024-07-13 19:55:17.051037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a40 cdw11:00000000 00:07:29.477 [2024-07-13 19:55:17.051062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.477 #9 NEW cov: 11990 ft: 13761 corp: 7/35b lim: 10 exec/s: 0 rss: 70Mb L: 2/9 MS: 1 InsertByte- 00:07:29.477 [2024-07-13 19:55:17.091525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:29.477 [2024-07-13 19:55:17.091550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.477 [2024-07-13 19:55:17.091600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 
cdw10:0000e9e9 cdw11:00000000 00:07:29.477 [2024-07-13 19:55:17.091616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.477 [2024-07-13 19:55:17.091666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e90c cdw11:00000000 00:07:29.477 [2024-07-13 19:55:17.091679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.477 [2024-07-13 19:55:17.091729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:29.477 [2024-07-13 19:55:17.091741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.477 #10 NEW cov: 11990 ft: 13808 corp: 8/44b lim: 10 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:07:29.477 [2024-07-13 19:55:17.131471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000c5d cdw11:00000000 00:07:29.477 [2024-07-13 19:55:17.131496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.477 [2024-07-13 19:55:17.131550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000cff cdw11:00000000 00:07:29.477 [2024-07-13 19:55:17.131564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.477 [2024-07-13 19:55:17.131614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff0c cdw11:00000000 00:07:29.477 [2024-07-13 19:55:17.131644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.735 #11 NEW cov: 11990 ft: 13824 corp: 9/51b lim: 10 exec/s: 0 rss: 70Mb L: 7/9 MS: 1 ShuffleBytes- 00:07:29.735 [2024-07-13 19:55:17.181870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:29.735 [2024-07-13 19:55:17.181895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.735 [2024-07-13 19:55:17.181947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000cff cdw11:00000000 00:07:29.735 [2024-07-13 19:55:17.181960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.735 [2024-07-13 19:55:17.182009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff0c cdw11:00000000 00:07:29.735 [2024-07-13 19:55:17.182023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.735 [2024-07-13 19:55:17.182072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:29.735 [2024-07-13 19:55:17.182084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.735 [2024-07-13 19:55:17.182133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 
cdw10:00000c5d cdw11:00000000 00:07:29.735 [2024-07-13 19:55:17.182146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:29.735 #12 NEW cov: 11990 ft: 13878 corp: 10/61b lim: 10 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 CopyPart- 00:07:29.735 [2024-07-13 19:55:17.221533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:29.735 [2024-07-13 19:55:17.221559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.735 #14 NEW cov: 11990 ft: 13920 corp: 11/63b lim: 10 exec/s: 0 rss: 70Mb L: 2/10 MS: 2 ShuffleBytes-CopyPart- 00:07:29.735 [2024-07-13 19:55:17.262053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:29.735 [2024-07-13 19:55:17.262082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.735 [2024-07-13 19:55:17.262134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:00000000 00:07:29.735 [2024-07-13 19:55:17.262147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.735 [2024-07-13 19:55:17.262200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff0c cdw11:00000000 00:07:29.735 [2024-07-13 19:55:17.262213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.735 [2024-07-13 19:55:17.262263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:29.735 [2024-07-13 19:55:17.262276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.735 [2024-07-13 19:55:17.262324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00000c5d cdw11:00000000 00:07:29.735 [2024-07-13 19:55:17.262337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:29.735 #15 NEW cov: 11990 ft: 13941 corp: 12/73b lim: 10 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 ChangeBinInt- 00:07:29.735 [2024-07-13 19:55:17.311786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000af7 cdw11:00000000 00:07:29.735 [2024-07-13 19:55:17.311811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.735 #16 NEW cov: 11990 ft: 13970 corp: 13/75b lim: 10 exec/s: 0 rss: 70Mb L: 2/10 MS: 1 ChangeByte- 00:07:29.735 [2024-07-13 19:55:17.361901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:07:29.735 [2024-07-13 19:55:17.361926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.993 #17 NEW cov: 11990 ft: 14009 corp: 14/77b lim: 10 exec/s: 0 rss: 70Mb L: 2/10 MS: 1 CopyPart- 00:07:29.993 [2024-07-13 19:55:17.412188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 
cdw10:0000ffff cdw11:00000000 00:07:29.993 [2024-07-13 19:55:17.412214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.993 [2024-07-13 19:55:17.412265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:29.993 [2024-07-13 19:55:17.412279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.993 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:29.993 #18 NEW cov: 12013 ft: 14045 corp: 15/81b lim: 10 exec/s: 0 rss: 70Mb L: 4/10 MS: 1 EraseBytes- 00:07:29.993 [2024-07-13 19:55:17.462647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:29.993 [2024-07-13 19:55:17.462672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.993 [2024-07-13 19:55:17.462725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e9e9 cdw11:00000000 00:07:29.993 [2024-07-13 19:55:17.462738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.993 [2024-07-13 19:55:17.462789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e90c cdw11:00000000 00:07:29.993 [2024-07-13 19:55:17.462802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.993 [2024-07-13 19:55:17.462852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:29.993 [2024-07-13 19:55:17.462868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.993 [2024-07-13 19:55:17.462919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00005de9 cdw11:00000000 00:07:29.993 [2024-07-13 19:55:17.462931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:29.993 #19 NEW cov: 12013 ft: 14052 corp: 16/91b lim: 10 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 CopyPart- 00:07:29.993 [2024-07-13 19:55:17.512324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000077f7 cdw11:00000000 00:07:29.993 [2024-07-13 19:55:17.512348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.993 #20 NEW cov: 12013 ft: 14072 corp: 17/93b lim: 10 exec/s: 20 rss: 70Mb L: 2/10 MS: 1 ChangeBit- 00:07:29.993 [2024-07-13 19:55:17.562719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e9e9 cdw11:00000000 00:07:29.993 [2024-07-13 19:55:17.562744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.993 [2024-07-13 19:55:17.562796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:29.993 [2024-07-13 19:55:17.562809] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.994 [2024-07-13 19:55:17.562861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000c5d cdw11:00000000 00:07:29.994 [2024-07-13 19:55:17.562874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.994 #21 NEW cov: 12013 ft: 14088 corp: 18/100b lim: 10 exec/s: 21 rss: 70Mb L: 7/10 MS: 1 EraseBytes- 00:07:29.994 [2024-07-13 19:55:17.612878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:29.994 [2024-07-13 19:55:17.612902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.994 [2024-07-13 19:55:17.612970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:29.994 [2024-07-13 19:55:17.612994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.994 [2024-07-13 19:55:17.613043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:29.994 [2024-07-13 19:55:17.613056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.994 #22 NEW cov: 12013 ft: 14104 corp: 19/107b lim: 10 exec/s: 22 rss: 70Mb L: 7/10 MS: 1 EraseBytes- 00:07:30.251 [2024-07-13 19:55:17.662748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:30.251 [2024-07-13 19:55:17.662772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.251 #23 NEW cov: 12013 ft: 14174 corp: 20/110b lim: 10 exec/s: 23 rss: 70Mb L: 3/10 MS: 1 EraseBytes- 00:07:30.251 [2024-07-13 19:55:17.713229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:30.251 [2024-07-13 19:55:17.713253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.251 [2024-07-13 19:55:17.713306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff0c cdw11:00000000 00:07:30.251 [2024-07-13 19:55:17.713319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.251 [2024-07-13 19:55:17.713370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000c0d cdw11:00000000 00:07:30.252 [2024-07-13 19:55:17.713383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.252 [2024-07-13 19:55:17.713433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:30.252 [2024-07-13 19:55:17.713449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.252 #24 NEW cov: 12013 ft: 14189 corp: 21/119b lim: 10 exec/s: 24 rss: 70Mb L: 9/10 MS: 1 ChangeBit- 
00:07:30.252 [2024-07-13 19:55:17.753212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000c5d cdw11:00000000 00:07:30.252 [2024-07-13 19:55:17.753237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.252 [2024-07-13 19:55:17.753306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00004cff cdw11:00000000 00:07:30.252 [2024-07-13 19:55:17.753319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.252 [2024-07-13 19:55:17.753373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff0c cdw11:00000000 00:07:30.252 [2024-07-13 19:55:17.753386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.252 #25 NEW cov: 12013 ft: 14200 corp: 22/126b lim: 10 exec/s: 25 rss: 70Mb L: 7/10 MS: 1 ChangeBit- 00:07:30.252 [2024-07-13 19:55:17.793095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a24 cdw11:00000000 00:07:30.252 [2024-07-13 19:55:17.793119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.252 #26 NEW cov: 12013 ft: 14208 corp: 23/128b lim: 10 exec/s: 26 rss: 70Mb L: 2/10 MS: 1 ChangeByte- 00:07:30.252 [2024-07-13 19:55:17.833565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff9d cdw11:00000000 00:07:30.252 [2024-07-13 19:55:17.833590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.252 [2024-07-13 19:55:17.833641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff0c cdw11:00000000 00:07:30.252 [2024-07-13 19:55:17.833655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.252 [2024-07-13 19:55:17.833705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:30.252 [2024-07-13 19:55:17.833718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.252 [2024-07-13 19:55:17.833766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000c5d cdw11:00000000 00:07:30.252 [2024-07-13 19:55:17.833779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.252 #27 NEW cov: 12013 ft: 14231 corp: 24/136b lim: 10 exec/s: 27 rss: 70Mb L: 8/10 MS: 1 InsertByte- 00:07:30.252 [2024-07-13 19:55:17.873678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000faf3 cdw11:00000000 00:07:30.252 [2024-07-13 19:55:17.873703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.252 [2024-07-13 19:55:17.873754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00001616 cdw11:00000000 00:07:30.252 [2024-07-13 19:55:17.873767] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.252 [2024-07-13 19:55:17.873820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000016f3 cdw11:00000000 00:07:30.252 [2024-07-13 19:55:17.873834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.252 [2024-07-13 19:55:17.873883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000f3f3 cdw11:00000000 00:07:30.252 [2024-07-13 19:55:17.873895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.252 #28 NEW cov: 12013 ft: 14256 corp: 25/145b lim: 10 exec/s: 28 rss: 70Mb L: 9/10 MS: 1 ChangeBinInt- 00:07:30.509 [2024-07-13 19:55:17.913711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000df0a cdw11:00000000 00:07:30.509 [2024-07-13 19:55:17.913735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.509 [2024-07-13 19:55:17.913788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:30.509 [2024-07-13 19:55:17.913802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.509 [2024-07-13 19:55:17.913852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:30.509 [2024-07-13 19:55:17.913865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.509 #29 NEW cov: 12013 ft: 14266 corp: 26/152b lim: 10 exec/s: 29 rss: 70Mb L: 7/10 MS: 1 ChangeBit- 00:07:30.509 [2024-07-13 19:55:17.963944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:30.509 [2024-07-13 19:55:17.963970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.509 [2024-07-13 19:55:17.964022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e9e9 cdw11:00000000 00:07:30.509 [2024-07-13 19:55:17.964035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.509 [2024-07-13 19:55:17.964085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:30.509 [2024-07-13 19:55:17.964098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.509 [2024-07-13 19:55:17.964148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000c5d cdw11:00000000 00:07:30.509 [2024-07-13 19:55:17.964161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.509 #30 NEW cov: 12013 ft: 14324 corp: 27/161b lim: 10 exec/s: 30 rss: 70Mb L: 9/10 MS: 1 CopyPart- 00:07:30.509 [2024-07-13 19:55:18.013891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ 
(00) qid:0 cid:4 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:07:30.509 [2024-07-13 19:55:18.013915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.509 [2024-07-13 19:55:18.013967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:07:30.509 [2024-07-13 19:55:18.013980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.509 #31 NEW cov: 12013 ft: 14396 corp: 28/165b lim: 10 exec/s: 31 rss: 70Mb L: 4/10 MS: 1 CrossOver- 00:07:30.509 [2024-07-13 19:55:18.054149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000df0a cdw11:00000000 00:07:30.509 [2024-07-13 19:55:18.054173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.510 [2024-07-13 19:55:18.054242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000c01 cdw11:00000000 00:07:30.510 [2024-07-13 19:55:18.054256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.510 [2024-07-13 19:55:18.054305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:30.510 [2024-07-13 19:55:18.054318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.510 #32 NEW cov: 12013 ft: 14406 corp: 29/172b lim: 10 exec/s: 32 rss: 71Mb L: 7/10 MS: 1 CMP- DE: "\001\000\000\000"- 00:07:30.510 [2024-07-13 19:55:18.104232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:30.510 [2024-07-13 19:55:18.104256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.510 [2024-07-13 19:55:18.104308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff0c cdw11:00000000 00:07:30.510 [2024-07-13 19:55:18.104322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.510 [2024-07-13 19:55:18.104374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000c5d cdw11:00000000 00:07:30.510 [2024-07-13 19:55:18.104387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.510 #33 NEW cov: 12013 ft: 14428 corp: 30/178b lim: 10 exec/s: 33 rss: 71Mb L: 6/10 MS: 1 EraseBytes- 00:07:30.510 [2024-07-13 19:55:18.144359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff0c cdw11:00000000 00:07:30.510 [2024-07-13 19:55:18.144384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.510 [2024-07-13 19:55:18.144437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff0c cdw11:00000000 00:07:30.510 [2024-07-13 19:55:18.144454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:07:30.510 [2024-07-13 19:55:18.144504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000c5d cdw11:00000000 00:07:30.510 [2024-07-13 19:55:18.144518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.510 #34 NEW cov: 12013 ft: 14437 corp: 31/185b lim: 10 exec/s: 34 rss: 71Mb L: 7/10 MS: 1 ShuffleBytes- 00:07:30.767 [2024-07-13 19:55:18.184561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:30.767 [2024-07-13 19:55:18.184585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.767 [2024-07-13 19:55:18.184637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e9e9 cdw11:00000000 00:07:30.767 [2024-07-13 19:55:18.184651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.767 [2024-07-13 19:55:18.184703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:30.767 [2024-07-13 19:55:18.184732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.767 [2024-07-13 19:55:18.184783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00005d0c cdw11:00000000 00:07:30.767 [2024-07-13 19:55:18.184796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.767 #35 NEW cov: 12013 ft: 14460 corp: 32/194b lim: 10 exec/s: 35 rss: 71Mb L: 9/10 MS: 1 CrossOver- 00:07:30.767 [2024-07-13 19:55:18.224587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff0c cdw11:00000000 00:07:30.767 [2024-07-13 19:55:18.224611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.767 [2024-07-13 19:55:18.224661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff0c cdw11:00000000 00:07:30.767 [2024-07-13 19:55:18.224674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.767 [2024-07-13 19:55:18.224727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000cce cdw11:00000000 00:07:30.767 [2024-07-13 19:55:18.224740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.767 #36 NEW cov: 12013 ft: 14524 corp: 33/201b lim: 10 exec/s: 36 rss: 71Mb L: 7/10 MS: 1 ChangeByte- 00:07:30.767 [2024-07-13 19:55:18.274517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000400a cdw11:00000000 00:07:30.767 [2024-07-13 19:55:18.274543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.767 #37 NEW cov: 12013 ft: 14540 corp: 34/203b lim: 10 exec/s: 37 rss: 71Mb L: 2/10 MS: 1 ShuffleBytes- 00:07:30.767 [2024-07-13 19:55:18.324893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:07:30.767 [2024-07-13 19:55:18.324918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.767 [2024-07-13 19:55:18.324970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:30.767 [2024-07-13 19:55:18.324984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.767 [2024-07-13 19:55:18.325034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:30.767 [2024-07-13 19:55:18.325048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.767 #38 NEW cov: 12013 ft: 14555 corp: 35/210b lim: 10 exec/s: 38 rss: 71Mb L: 7/10 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:07:30.768 [2024-07-13 19:55:18.365148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000077f7 cdw11:00000000 00:07:30.768 [2024-07-13 19:55:18.365173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.768 [2024-07-13 19:55:18.365223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00009595 cdw11:00000000 00:07:30.768 [2024-07-13 19:55:18.365236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.768 [2024-07-13 19:55:18.365283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00009595 cdw11:00000000 00:07:30.768 [2024-07-13 19:55:18.365296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.768 [2024-07-13 19:55:18.365345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00009595 cdw11:00000000 00:07:30.768 [2024-07-13 19:55:18.365357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.768 #39 NEW cov: 12013 ft: 14610 corp: 36/218b lim: 10 exec/s: 39 rss: 71Mb L: 8/10 MS: 1 InsertRepeatedBytes- 00:07:30.768 [2024-07-13 19:55:18.415255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff0c cdw11:00000000 00:07:30.768 [2024-07-13 19:55:18.415283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.768 [2024-07-13 19:55:18.415335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff0c cdw11:00000000 00:07:30.768 [2024-07-13 19:55:18.415348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.768 [2024-07-13 19:55:18.415395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005d0c cdw11:00000000 00:07:30.768 [2024-07-13 19:55:18.415409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.768 [2024-07-13 19:55:18.415462] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ce0c cdw11:00000000 00:07:30.768 [2024-07-13 19:55:18.415474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.044 #40 NEW cov: 12013 ft: 14623 corp: 37/226b lim: 10 exec/s: 40 rss: 71Mb L: 8/10 MS: 1 CrossOver- 00:07:31.044 [2024-07-13 19:55:18.465283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e9e9 cdw11:00000000 00:07:31.044 [2024-07-13 19:55:18.465307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.044 [2024-07-13 19:55:18.465357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:31.044 [2024-07-13 19:55:18.465370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.044 [2024-07-13 19:55:18.465417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000c24 cdw11:00000000 00:07:31.044 [2024-07-13 19:55:18.465450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.044 #41 NEW cov: 12013 ft: 14626 corp: 38/233b lim: 10 exec/s: 41 rss: 71Mb L: 7/10 MS: 1 ChangeByte- 00:07:31.044 [2024-07-13 19:55:18.505366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:31.044 [2024-07-13 19:55:18.505390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.044 [2024-07-13 19:55:18.505439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000c0c cdw11:00000000 00:07:31.044 [2024-07-13 19:55:18.505457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.044 [2024-07-13 19:55:18.505505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ce0c cdw11:00000000 00:07:31.044 [2024-07-13 19:55:18.505518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.044 #42 NEW cov: 12013 ft: 14632 corp: 39/239b lim: 10 exec/s: 21 rss: 71Mb L: 6/10 MS: 1 EraseBytes- 00:07:31.044 #42 DONE cov: 12013 ft: 14632 corp: 39/239b lim: 10 exec/s: 21 rss: 71Mb 00:07:31.044 ###### Recommended dictionary. ###### 00:07:31.044 "\001\000\000\000" # Uses: 1 00:07:31.044 ###### End of recommended dictionary. 
###### 00:07:31.044 Done 42 runs in 2 second(s) 00:07:31.044 19:55:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:07:31.044 19:55:18 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:31.044 19:55:18 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:31.044 19:55:18 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:07:31.044 19:55:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:07:31.044 19:55:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:31.044 19:55:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:31.044 19:55:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:31.044 19:55:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:07:31.044 19:55:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:31.044 19:55:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:31.044 19:55:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:07:31.044 19:55:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4408 00:07:31.044 19:55:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:31.044 19:55:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:07:31.044 19:55:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:31.044 19:55:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:31.044 19:55:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:31.044 19:55:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:07:31.044 [2024-07-13 19:55:18.681973] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:31.044 [2024-07-13 19:55:18.682045] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3677057 ] 00:07:31.345 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.345 [2024-07-13 19:55:18.866253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.345 [2024-07-13 19:55:18.888080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.345 [2024-07-13 19:55:18.940269] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.345 [2024-07-13 19:55:18.956593] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:07:31.345 INFO: Running with entropic power schedule (0xFF, 100). 
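Note on the harness setup traced above: the nvmf/run.sh steps shown are the whole per-fuzzer-type configuration — they derive the TCP listener port from the fuzzer type, create a corpus directory, rewrite the template JSON config to use that port, register two LeakSanitizer suppressions, and then launch llvm_nvme_fuzz against the freshly started NVMe/TCP listener on 127.0.0.1:4408. A minimal stand-alone sketch of the same setup follows, assembled only from the trace above. The redirection targets for the sed and echo steps are assumptions (the xtrace output does not show where their output goes), and SPDK_DIR is simply the workspace path seen in this log; treat this as an illustrative sketch, not the literal run.sh source.

#!/usr/bin/env bash
# Sketch of the nvmf fuzz harness setup traced above (fuzzer type 8).
set -euo pipefail

SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # assumed checkout location from this log
fuzzer_type=8
timen=1            # -t: run time in seconds
core=0x1           # -m: reactor core mask

corpus_dir=$SPDK_DIR/../corpus/llvm_nvmf_$fuzzer_type
nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
suppress_file=/var/tmp/suppress_nvmf_fuzz

# Listener port is "44" plus the zero-padded fuzzer type, e.g. 4408 for type 8.
port=44$(printf %02d "$fuzzer_type")
trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

mkdir -p "$corpus_dir"

# Rewrite the default trsvcid 4420 in the template config for this run's port.
# Writing the result to $nvmf_cfg is an assumption; the trace only shows the sed command itself.
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

# Suppress two known/accepted leaks for LeakSanitizer (redirection assumed, as above).
{
  echo leak:spdk_nvmf_qpair_disconnect
  echo leak:nvmf_ctrlr_create
} > "$suppress_file"

export LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0"

# Launch the fuzzer exactly as in the trace: 512 MB hugepage memory, corpus dir, per-type -Z selector.
"$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
  -m "$core" -s 512 \
  -P "$SPDK_DIR/../output/llvm/" \
  -F "$trid" \
  -c "$nvmf_cfg" \
  -t "$timen" \
  -D "$corpus_dir" \
  -Z "$fuzzer_type"

Each fuzzer type in the loop repeats this pattern with its own port (44NN), corpus directory, and config file, which is why the log switches from port 4407 traffic to "Listening on 127.0.0.1 port 4408" at this point.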
00:07:31.345 INFO: Seed: 499120247 00:07:31.345 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:31.345 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:31.345 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:31.345 INFO: A corpus is not provided, starting from an empty corpus 00:07:31.345 [2024-07-13 19:55:19.004965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.345 [2024-07-13 19:55:19.004996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.603 #2 INITED cov: 11797 ft: 11798 corp: 1/1b exec/s: 0 rss: 67Mb 00:07:31.603 [2024-07-13 19:55:19.045139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-07-13 19:55:19.045165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.603 [2024-07-13 19:55:19.045223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-07-13 19:55:19.045236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.603 #3 NEW cov: 11927 ft: 13111 corp: 2/3b lim: 5 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 InsertByte- 00:07:31.603 [2024-07-13 19:55:19.095261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-07-13 19:55:19.095293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.603 [2024-07-13 19:55:19.095350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-07-13 19:55:19.095364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.603 #4 NEW cov: 11933 ft: 13237 corp: 3/5b lim: 5 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 ShuffleBytes- 00:07:31.603 [2024-07-13 19:55:19.145751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-07-13 19:55:19.145778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.603 [2024-07-13 19:55:19.145836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-07-13 19:55:19.145851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.603 [2024-07-13 19:55:19.145907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:31.603 [2024-07-13 19:55:19.145921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.603 [2024-07-13 19:55:19.145978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-07-13 19:55:19.145992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.603 #5 NEW cov: 12018 ft: 13967 corp: 4/9b lim: 5 exec/s: 0 rss: 68Mb L: 4/4 MS: 1 CopyPart- 00:07:31.603 [2024-07-13 19:55:19.195363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-07-13 19:55:19.195388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.603 #6 NEW cov: 12018 ft: 14083 corp: 5/10b lim: 5 exec/s: 0 rss: 68Mb L: 1/4 MS: 1 ChangeBit- 00:07:31.603 [2024-07-13 19:55:19.236017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-07-13 19:55:19.236042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.603 [2024-07-13 19:55:19.236098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-07-13 19:55:19.236112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.603 [2024-07-13 19:55:19.236170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-07-13 19:55:19.236183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.603 [2024-07-13 19:55:19.236240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.603 [2024-07-13 19:55:19.236254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.862 #7 NEW cov: 12018 ft: 14145 corp: 6/14b lim: 5 exec/s: 0 rss: 68Mb L: 4/4 MS: 1 ChangeBinInt- 00:07:31.862 [2024-07-13 19:55:19.286283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.862 [2024-07-13 19:55:19.286310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.862 [2024-07-13 19:55:19.286365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.862 [2024-07-13 19:55:19.286380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.862 [2024-07-13 19:55:19.286436] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.862 [2024-07-13 19:55:19.286454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.862 [2024-07-13 19:55:19.286510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.862 [2024-07-13 19:55:19.286523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.862 [2024-07-13 19:55:19.286580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.862 [2024-07-13 19:55:19.286593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:31.862 #8 NEW cov: 12018 ft: 14293 corp: 7/19b lim: 5 exec/s: 0 rss: 68Mb L: 5/5 MS: 1 InsertByte- 00:07:31.862 [2024-07-13 19:55:19.326352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.862 [2024-07-13 19:55:19.326377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.862 [2024-07-13 19:55:19.326435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.862 [2024-07-13 19:55:19.326453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.862 [2024-07-13 19:55:19.326508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.863 [2024-07-13 19:55:19.326521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.863 [2024-07-13 19:55:19.326577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.863 [2024-07-13 19:55:19.326590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.863 [2024-07-13 19:55:19.326645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.863 [2024-07-13 19:55:19.326659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:31.863 #9 NEW cov: 12018 ft: 14332 corp: 8/24b lim: 5 exec/s: 0 rss: 69Mb L: 5/5 MS: 1 CrossOver- 00:07:31.863 [2024-07-13 19:55:19.376191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.863 [2024-07-13 19:55:19.376216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:07:31.863 [2024-07-13 19:55:19.376274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.863 [2024-07-13 19:55:19.376287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.863 [2024-07-13 19:55:19.376345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.863 [2024-07-13 19:55:19.376358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.863 #10 NEW cov: 12018 ft: 14531 corp: 9/27b lim: 5 exec/s: 0 rss: 69Mb L: 3/5 MS: 1 EraseBytes- 00:07:31.863 [2024-07-13 19:55:19.416471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.863 [2024-07-13 19:55:19.416496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.863 [2024-07-13 19:55:19.416551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.863 [2024-07-13 19:55:19.416565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.863 [2024-07-13 19:55:19.416634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.863 [2024-07-13 19:55:19.416648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.863 [2024-07-13 19:55:19.416703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.863 [2024-07-13 19:55:19.416717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.863 #11 NEW cov: 12018 ft: 14583 corp: 10/31b lim: 5 exec/s: 0 rss: 69Mb L: 4/5 MS: 1 ChangeBit- 00:07:31.863 [2024-07-13 19:55:19.466112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.863 [2024-07-13 19:55:19.466137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.863 #12 NEW cov: 12018 ft: 14639 corp: 11/32b lim: 5 exec/s: 0 rss: 69Mb L: 1/5 MS: 1 EraseBytes- 00:07:31.863 [2024-07-13 19:55:19.506890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.863 [2024-07-13 19:55:19.506915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.863 [2024-07-13 19:55:19.506974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:31.863 [2024-07-13 19:55:19.506987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.863 [2024-07-13 19:55:19.507044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.863 [2024-07-13 19:55:19.507057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.863 [2024-07-13 19:55:19.507114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.863 [2024-07-13 19:55:19.507127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.863 [2024-07-13 19:55:19.507187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.863 [2024-07-13 19:55:19.507200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:32.122 #13 NEW cov: 12018 ft: 14672 corp: 12/37b lim: 5 exec/s: 0 rss: 69Mb L: 5/5 MS: 1 ChangeByte- 00:07:32.122 [2024-07-13 19:55:19.546717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.122 [2024-07-13 19:55:19.546742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.122 [2024-07-13 19:55:19.546800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.122 [2024-07-13 19:55:19.546813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.122 [2024-07-13 19:55:19.546869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.122 [2024-07-13 19:55:19.546883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.122 #14 NEW cov: 12018 ft: 14701 corp: 13/40b lim: 5 exec/s: 0 rss: 69Mb L: 3/5 MS: 1 ChangeByte- 00:07:32.122 [2024-07-13 19:55:19.596997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.122 [2024-07-13 19:55:19.597022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.122 [2024-07-13 19:55:19.597080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.122 [2024-07-13 19:55:19.597094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.122 [2024-07-13 19:55:19.597152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) 
qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.122 [2024-07-13 19:55:19.597165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.122 [2024-07-13 19:55:19.597221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.122 [2024-07-13 19:55:19.597234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.122 #15 NEW cov: 12018 ft: 14716 corp: 14/44b lim: 5 exec/s: 0 rss: 69Mb L: 4/5 MS: 1 ChangeBit- 00:07:32.122 [2024-07-13 19:55:19.646971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.122 [2024-07-13 19:55:19.646996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.122 [2024-07-13 19:55:19.647054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.122 [2024-07-13 19:55:19.647068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.122 [2024-07-13 19:55:19.647126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.122 [2024-07-13 19:55:19.647143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.122 #16 NEW cov: 12018 ft: 14729 corp: 15/47b lim: 5 exec/s: 0 rss: 69Mb L: 3/5 MS: 1 EraseBytes- 00:07:32.122 [2024-07-13 19:55:19.696926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.122 [2024-07-13 19:55:19.696951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.122 [2024-07-13 19:55:19.697009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.122 [2024-07-13 19:55:19.697023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.122 #17 NEW cov: 12018 ft: 14755 corp: 16/49b lim: 5 exec/s: 0 rss: 69Mb L: 2/5 MS: 1 CopyPart- 00:07:32.122 [2024-07-13 19:55:19.737415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.122 [2024-07-13 19:55:19.737439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.122 [2024-07-13 19:55:19.737502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.122 [2024-07-13 19:55:19.737516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.122 [2024-07-13 19:55:19.737572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.122 [2024-07-13 19:55:19.737585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.122 [2024-07-13 19:55:19.737643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.122 [2024-07-13 19:55:19.737656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.122 #18 NEW cov: 12018 ft: 14794 corp: 17/53b lim: 5 exec/s: 0 rss: 69Mb L: 4/5 MS: 1 ChangeByte- 00:07:32.122 [2024-07-13 19:55:19.777497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.122 [2024-07-13 19:55:19.777522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.122 [2024-07-13 19:55:19.777579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.122 [2024-07-13 19:55:19.777594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.122 [2024-07-13 19:55:19.777651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.122 [2024-07-13 19:55:19.777665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.122 [2024-07-13 19:55:19.777722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.122 [2024-07-13 19:55:19.777735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.381 #19 NEW cov: 12018 ft: 14797 corp: 18/57b lim: 5 exec/s: 0 rss: 69Mb L: 4/5 MS: 1 CMP- DE: "\000\000\000\000"- 00:07:32.381 [2024-07-13 19:55:19.827635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.381 [2024-07-13 19:55:19.827660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.381 [2024-07-13 19:55:19.827720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.381 [2024-07-13 19:55:19.827734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.381 [2024-07-13 19:55:19.827792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.381 [2024-07-13 19:55:19.827805] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.381 [2024-07-13 19:55:19.827862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.381 [2024-07-13 19:55:19.827875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.381 #20 NEW cov: 12018 ft: 14819 corp: 19/61b lim: 5 exec/s: 0 rss: 69Mb L: 4/5 MS: 1 CopyPart- 00:07:32.381 [2024-07-13 19:55:19.867738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.381 [2024-07-13 19:55:19.867763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.381 [2024-07-13 19:55:19.867822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.381 [2024-07-13 19:55:19.867835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.381 [2024-07-13 19:55:19.867892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.381 [2024-07-13 19:55:19.867905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.381 [2024-07-13 19:55:19.867962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.381 [2024-07-13 19:55:19.867975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.381 #21 NEW cov: 12018 ft: 14851 corp: 20/65b lim: 5 exec/s: 0 rss: 69Mb L: 4/5 MS: 1 ShuffleBytes- 00:07:32.381 [2024-07-13 19:55:19.908030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.381 [2024-07-13 19:55:19.908054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.381 [2024-07-13 19:55:19.908114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.381 [2024-07-13 19:55:19.908128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.381 [2024-07-13 19:55:19.908182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.381 [2024-07-13 19:55:19.908196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.381 [2024-07-13 19:55:19.908251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:32.381 [2024-07-13 19:55:19.908267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.381 [2024-07-13 19:55:19.908323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.381 [2024-07-13 19:55:19.908336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:32.641 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:32.641 #22 NEW cov: 12041 ft: 14926 corp: 21/70b lim: 5 exec/s: 22 rss: 70Mb L: 5/5 MS: 1 CopyPart- 00:07:32.641 [2024-07-13 19:55:20.218427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.641 [2024-07-13 19:55:20.218466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.641 [2024-07-13 19:55:20.218542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.641 [2024-07-13 19:55:20.218557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.641 #23 NEW cov: 12041 ft: 14938 corp: 22/72b lim: 5 exec/s: 23 rss: 70Mb L: 2/5 MS: 1 EraseBytes- 00:07:32.641 [2024-07-13 19:55:20.268818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.641 [2024-07-13 19:55:20.268845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.641 [2024-07-13 19:55:20.268906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.641 [2024-07-13 19:55:20.268919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.641 [2024-07-13 19:55:20.268978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.641 [2024-07-13 19:55:20.268991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.641 [2024-07-13 19:55:20.269050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.641 [2024-07-13 19:55:20.269063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.641 #24 NEW cov: 12041 ft: 14972 corp: 23/76b lim: 5 exec/s: 24 rss: 70Mb L: 4/5 MS: 1 ChangeBinInt- 00:07:32.900 [2024-07-13 19:55:20.308912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.900 [2024-07-13 19:55:20.308939] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.900 [2024-07-13 19:55:20.309001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.900 [2024-07-13 19:55:20.309016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.900 [2024-07-13 19:55:20.309076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.900 [2024-07-13 19:55:20.309093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.900 [2024-07-13 19:55:20.309149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.900 [2024-07-13 19:55:20.309164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.900 #25 NEW cov: 12041 ft: 14983 corp: 24/80b lim: 5 exec/s: 25 rss: 70Mb L: 4/5 MS: 1 CopyPart- 00:07:32.900 [2024-07-13 19:55:20.349026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.900 [2024-07-13 19:55:20.349052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.900 [2024-07-13 19:55:20.349111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.900 [2024-07-13 19:55:20.349124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.900 [2024-07-13 19:55:20.349185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.900 [2024-07-13 19:55:20.349198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.900 [2024-07-13 19:55:20.349257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.900 [2024-07-13 19:55:20.349271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.900 #26 NEW cov: 12041 ft: 15002 corp: 25/84b lim: 5 exec/s: 26 rss: 70Mb L: 4/5 MS: 1 CMP- DE: "\007\000"- 00:07:32.901 [2024-07-13 19:55:20.389180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.901 [2024-07-13 19:55:20.389207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.901 [2024-07-13 19:55:20.389267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:07:32.901 [2024-07-13 19:55:20.389281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.901 [2024-07-13 19:55:20.389341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.901 [2024-07-13 19:55:20.389355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.901 [2024-07-13 19:55:20.389415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.901 [2024-07-13 19:55:20.389428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.901 #27 NEW cov: 12041 ft: 15018 corp: 26/88b lim: 5 exec/s: 27 rss: 70Mb L: 4/5 MS: 1 CopyPart- 00:07:32.901 [2024-07-13 19:55:20.439257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.901 [2024-07-13 19:55:20.439284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.901 [2024-07-13 19:55:20.439343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.901 [2024-07-13 19:55:20.439360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.901 [2024-07-13 19:55:20.439418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.901 [2024-07-13 19:55:20.439431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.901 [2024-07-13 19:55:20.439490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.901 [2024-07-13 19:55:20.439503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.901 #28 NEW cov: 12041 ft: 15029 corp: 27/92b lim: 5 exec/s: 28 rss: 70Mb L: 4/5 MS: 1 ChangeByte- 00:07:32.901 [2024-07-13 19:55:20.489091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.901 [2024-07-13 19:55:20.489117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.901 [2024-07-13 19:55:20.489194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.901 [2024-07-13 19:55:20.489208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.901 #29 NEW cov: 12041 ft: 15034 corp: 28/94b lim: 5 exec/s: 29 rss: 70Mb L: 2/5 MS: 1 ChangeBinInt- 00:07:32.901 [2024-07-13 
19:55:20.539242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.901 [2024-07-13 19:55:20.539267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.901 [2024-07-13 19:55:20.539328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.901 [2024-07-13 19:55:20.539342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.160 #30 NEW cov: 12041 ft: 15037 corp: 29/96b lim: 5 exec/s: 30 rss: 70Mb L: 2/5 MS: 1 ChangeBit- 00:07:33.160 [2024-07-13 19:55:20.589881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.160 [2024-07-13 19:55:20.589906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.160 [2024-07-13 19:55:20.589967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.160 [2024-07-13 19:55:20.589981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.160 [2024-07-13 19:55:20.590039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.160 [2024-07-13 19:55:20.590052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.160 [2024-07-13 19:55:20.590111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.160 [2024-07-13 19:55:20.590124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.160 [2024-07-13 19:55:20.590183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.160 [2024-07-13 19:55:20.590199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.160 #31 NEW cov: 12041 ft: 15069 corp: 30/101b lim: 5 exec/s: 31 rss: 70Mb L: 5/5 MS: 1 CopyPart- 00:07:33.160 [2024-07-13 19:55:20.629685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.160 [2024-07-13 19:55:20.629712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.160 [2024-07-13 19:55:20.629788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.160 [2024-07-13 19:55:20.629802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.160 [2024-07-13 19:55:20.629861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.160 [2024-07-13 19:55:20.629874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.160 #32 NEW cov: 12041 ft: 15082 corp: 31/104b lim: 5 exec/s: 32 rss: 70Mb L: 3/5 MS: 1 InsertByte- 00:07:33.160 [2024-07-13 19:55:20.669975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.160 [2024-07-13 19:55:20.670000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.160 [2024-07-13 19:55:20.670060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.160 [2024-07-13 19:55:20.670074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.160 [2024-07-13 19:55:20.670133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.160 [2024-07-13 19:55:20.670147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.160 [2024-07-13 19:55:20.670203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.160 [2024-07-13 19:55:20.670216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.160 #33 NEW cov: 12041 ft: 15092 corp: 32/108b lim: 5 exec/s: 33 rss: 70Mb L: 4/5 MS: 1 ChangeByte- 00:07:33.160 [2024-07-13 19:55:20.710214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.160 [2024-07-13 19:55:20.710239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.160 [2024-07-13 19:55:20.710299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.160 [2024-07-13 19:55:20.710312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.160 [2024-07-13 19:55:20.710373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.160 [2024-07-13 19:55:20.710386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.160 [2024-07-13 19:55:20.710448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.160 [2024-07-13 19:55:20.710462] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.160 [2024-07-13 19:55:20.710522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.160 [2024-07-13 19:55:20.710534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.160 #34 NEW cov: 12041 ft: 15104 corp: 33/113b lim: 5 exec/s: 34 rss: 70Mb L: 5/5 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:07:33.160 [2024-07-13 19:55:20.759877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.160 [2024-07-13 19:55:20.759902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.160 [2024-07-13 19:55:20.759963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.160 [2024-07-13 19:55:20.759976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.160 #35 NEW cov: 12041 ft: 15116 corp: 34/115b lim: 5 exec/s: 35 rss: 71Mb L: 2/5 MS: 1 CopyPart- 00:07:33.160 [2024-07-13 19:55:20.810457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.160 [2024-07-13 19:55:20.810482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.160 [2024-07-13 19:55:20.810540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.160 [2024-07-13 19:55:20.810554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.160 [2024-07-13 19:55:20.810611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.160 [2024-07-13 19:55:20.810625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.160 [2024-07-13 19:55:20.810682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.160 [2024-07-13 19:55:20.810696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.160 [2024-07-13 19:55:20.810752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.160 [2024-07-13 19:55:20.810766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:33.420 #36 NEW cov: 12041 ft: 15134 corp: 35/120b lim: 5 exec/s: 36 rss: 71Mb L: 5/5 MS: 1 ChangeBit- 00:07:33.420 [2024-07-13 19:55:20.860504] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.420 [2024-07-13 19:55:20.860528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.420 [2024-07-13 19:55:20.860592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.420 [2024-07-13 19:55:20.860606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.420 [2024-07-13 19:55:20.860670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.420 [2024-07-13 19:55:20.860683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.420 [2024-07-13 19:55:20.860741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.420 [2024-07-13 19:55:20.860755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.420 #37 NEW cov: 12041 ft: 15155 corp: 36/124b lim: 5 exec/s: 37 rss: 71Mb L: 4/5 MS: 1 ChangeByte- 00:07:33.420 [2024-07-13 19:55:20.900619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.420 [2024-07-13 19:55:20.900644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.420 [2024-07-13 19:55:20.900704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.420 [2024-07-13 19:55:20.900717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.420 [2024-07-13 19:55:20.900776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.420 [2024-07-13 19:55:20.900790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.420 [2024-07-13 19:55:20.900847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.420 [2024-07-13 19:55:20.900860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.420 #38 NEW cov: 12041 ft: 15170 corp: 37/128b lim: 5 exec/s: 38 rss: 71Mb L: 4/5 MS: 1 ChangeBinInt- 00:07:33.420 [2024-07-13 19:55:20.950228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.420 [2024-07-13 19:55:20.950252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:07:33.420 #39 NEW cov: 12041 ft: 15191 corp: 38/129b lim: 5 exec/s: 39 rss: 71Mb L: 1/5 MS: 1 CopyPart- 00:07:33.420 [2024-07-13 19:55:20.990888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.420 [2024-07-13 19:55:20.990912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.420 [2024-07-13 19:55:20.990968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.420 [2024-07-13 19:55:20.990982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.420 [2024-07-13 19:55:20.991038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.420 [2024-07-13 19:55:20.991051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.420 [2024-07-13 19:55:20.991107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.420 [2024-07-13 19:55:20.991124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:33.420 #40 NEW cov: 12041 ft: 15252 corp: 39/133b lim: 5 exec/s: 20 rss: 71Mb L: 4/5 MS: 1 ChangeByte- 00:07:33.420 #40 DONE cov: 12041 ft: 15252 corp: 39/133b lim: 5 exec/s: 20 rss: 71Mb 00:07:33.420 ###### Recommended dictionary. ###### 00:07:33.420 "\000\000\000\000" # Uses: 1 00:07:33.420 "\007\000" # Uses: 0 00:07:33.420 ###### End of recommended dictionary. 
###### 00:07:33.420 Done 40 runs in 2 second(s) 00:07:33.679 19:55:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:07:33.679 19:55:21 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:33.679 19:55:21 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:33.679 19:55:21 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:07:33.679 19:55:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:07:33.679 19:55:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:33.679 19:55:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:33.679 19:55:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:33.679 19:55:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:07:33.679 19:55:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:33.679 19:55:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:33.679 19:55:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:07:33.679 19:55:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4409 00:07:33.679 19:55:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:33.679 19:55:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:07:33.679 19:55:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:33.679 19:55:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:33.679 19:55:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:33.679 19:55:21 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:07:33.679 [2024-07-13 19:55:21.168733] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:33.679 [2024-07-13 19:55:21.168801] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3677557 ] 00:07:33.679 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.938 [2024-07-13 19:55:21.341404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.938 [2024-07-13 19:55:21.363134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.938 [2024-07-13 19:55:21.415329] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.938 [2024-07-13 19:55:21.431651] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:07:33.938 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:33.938 INFO: Seed: 2976124359 00:07:33.938 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:33.938 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:33.938 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:33.938 INFO: A corpus is not provided, starting from an empty corpus 00:07:33.938 [2024-07-13 19:55:21.486909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.938 [2024-07-13 19:55:21.486941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.938 #2 INITED cov: 11797 ft: 11796 corp: 1/1b exec/s: 0 rss: 67Mb 00:07:33.938 [2024-07-13 19:55:21.526931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.938 [2024-07-13 19:55:21.526956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.938 #3 NEW cov: 11927 ft: 12383 corp: 2/2b lim: 5 exec/s: 0 rss: 68Mb L: 1/1 MS: 1 ChangeBit- 00:07:33.938 [2024-07-13 19:55:21.577238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.938 [2024-07-13 19:55:21.577262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.938 [2024-07-13 19:55:21.577317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.938 [2024-07-13 19:55:21.577331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.197 #4 NEW cov: 11933 ft: 13137 corp: 3/4b lim: 5 exec/s: 0 rss: 69Mb L: 2/2 MS: 1 CopyPart- 00:07:34.197 [2024-07-13 19:55:21.617185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.197 [2024-07-13 19:55:21.617210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.197 #5 NEW cov: 12018 ft: 13565 corp: 4/5b lim: 5 exec/s: 0 rss: 69Mb L: 1/2 MS: 1 ChangeBit- 00:07:34.197 [2024-07-13 19:55:21.657453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.197 [2024-07-13 19:55:21.657477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.197 [2024-07-13 19:55:21.657533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.197 [2024-07-13 19:55:21.657546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.197 #6 NEW cov: 12018 ft: 13630 corp: 5/7b lim: 5 exec/s: 0 rss: 69Mb L: 2/2 MS: 1 InsertByte- 00:07:34.197 
[2024-07-13 19:55:21.697405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.197 [2024-07-13 19:55:21.697430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.197 #7 NEW cov: 12018 ft: 13685 corp: 6/8b lim: 5 exec/s: 0 rss: 69Mb L: 1/2 MS: 1 ChangeBinInt- 00:07:34.197 [2024-07-13 19:55:21.737538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.197 [2024-07-13 19:55:21.737562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.197 #8 NEW cov: 12018 ft: 13790 corp: 7/9b lim: 5 exec/s: 0 rss: 69Mb L: 1/2 MS: 1 ShuffleBytes- 00:07:34.197 [2024-07-13 19:55:21.787841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.197 [2024-07-13 19:55:21.787865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.197 [2024-07-13 19:55:21.787921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.197 [2024-07-13 19:55:21.787938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.197 #9 NEW cov: 12018 ft: 13880 corp: 8/11b lim: 5 exec/s: 0 rss: 69Mb L: 2/2 MS: 1 CrossOver- 00:07:34.197 [2024-07-13 19:55:21.837970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.197 [2024-07-13 19:55:21.837994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.197 [2024-07-13 19:55:21.838050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.197 [2024-07-13 19:55:21.838063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.456 #10 NEW cov: 12018 ft: 13938 corp: 9/13b lim: 5 exec/s: 0 rss: 69Mb L: 2/2 MS: 1 InsertByte- 00:07:34.456 [2024-07-13 19:55:21.888258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.456 [2024-07-13 19:55:21.888283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.456 [2024-07-13 19:55:21.888340] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.456 [2024-07-13 19:55:21.888353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.456 [2024-07-13 19:55:21.888408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.456 [2024-07-13 19:55:21.888421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.456 #11 NEW cov: 12018 ft: 14146 corp: 10/16b lim: 5 exec/s: 0 rss: 69Mb L: 3/3 MS: 1 InsertByte- 00:07:34.456 [2024-07-13 19:55:21.938207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.456 [2024-07-13 19:55:21.938232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.456 [2024-07-13 19:55:21.938287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.456 [2024-07-13 19:55:21.938301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.456 #12 NEW cov: 12018 ft: 14165 corp: 11/18b lim: 5 exec/s: 0 rss: 69Mb L: 2/3 MS: 1 CrossOver- 00:07:34.456 [2024-07-13 19:55:21.978472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.456 [2024-07-13 19:55:21.978498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.456 [2024-07-13 19:55:21.978554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.456 [2024-07-13 19:55:21.978567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.456 [2024-07-13 19:55:21.978637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.456 [2024-07-13 19:55:21.978651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.456 #13 NEW cov: 12018 ft: 14194 corp: 12/21b lim: 5 exec/s: 0 rss: 70Mb L: 3/3 MS: 1 ShuffleBytes- 00:07:34.456 [2024-07-13 19:55:22.028633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.456 [2024-07-13 19:55:22.028658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.456 [2024-07-13 19:55:22.028714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.456 [2024-07-13 19:55:22.028727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.456 [2024-07-13 19:55:22.028783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.456 [2024-07-13 19:55:22.028796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.456 #14 NEW cov: 12018 ft: 14220 corp: 13/24b lim: 5 exec/s: 0 rss: 70Mb L: 3/3 MS: 1 CopyPart- 00:07:34.456 [2024-07-13 19:55:22.078639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.456 [2024-07-13 19:55:22.078663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.456 [2024-07-13 19:55:22.078718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.456 [2024-07-13 19:55:22.078731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.456 #15 NEW cov: 12018 ft: 14251 corp: 14/26b lim: 5 exec/s: 0 rss: 70Mb L: 2/3 MS: 1 EraseBytes- 00:07:34.715 [2024-07-13 19:55:22.128928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.715 [2024-07-13 19:55:22.128953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.715 [2024-07-13 19:55:22.129009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.715 [2024-07-13 19:55:22.129023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.715 [2024-07-13 19:55:22.129079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.715 [2024-07-13 19:55:22.129092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.715 #16 NEW cov: 12018 ft: 14255 corp: 15/29b lim: 5 exec/s: 0 rss: 70Mb L: 3/3 MS: 1 CrossOver- 00:07:34.715 [2024-07-13 19:55:22.178889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.715 [2024-07-13 19:55:22.178914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.715 [2024-07-13 19:55:22.178970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.715 [2024-07-13 19:55:22.178984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.715 #17 NEW cov: 12018 ft: 14283 corp: 16/31b lim: 5 exec/s: 0 rss: 70Mb L: 2/3 MS: 1 EraseBytes- 00:07:34.715 [2024-07-13 19:55:22.219171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.715 [2024-07-13 19:55:22.219195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.715 [2024-07-13 19:55:22.219251] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.715 [2024-07-13 19:55:22.219264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.715 [2024-07-13 19:55:22.219320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.715 [2024-07-13 19:55:22.219333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.715 #18 NEW cov: 12018 ft: 14297 corp: 17/34b lim: 5 exec/s: 0 rss: 70Mb L: 3/3 MS: 1 ChangeByte- 00:07:34.715 [2024-07-13 19:55:22.268985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.715 [2024-07-13 19:55:22.269009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.715 #19 NEW cov: 12018 ft: 14305 corp: 18/35b lim: 5 exec/s: 0 rss: 70Mb L: 1/3 MS: 1 ShuffleBytes- 00:07:34.715 [2024-07-13 19:55:22.309402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.715 [2024-07-13 19:55:22.309426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.715 [2024-07-13 19:55:22.309484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.715 [2024-07-13 19:55:22.309497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.715 [2024-07-13 19:55:22.309551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.715 [2024-07-13 19:55:22.309564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.715 #20 NEW cov: 12018 ft: 14316 corp: 19/38b lim: 5 exec/s: 0 rss: 70Mb L: 3/3 MS: 1 CrossOver- 00:07:34.715 [2024-07-13 19:55:22.359227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.715 [2024-07-13 19:55:22.359252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.232 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:35.232 #21 NEW cov: 12041 ft: 14343 corp: 20/39b lim: 5 exec/s: 21 rss: 70Mb L: 1/3 MS: 1 CrossOver- 00:07:35.232 [2024-07-13 19:55:22.690971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.232 [2024-07-13 19:55:22.691028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.232 
[2024-07-13 19:55:22.691108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.232 [2024-07-13 19:55:22.691135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.232 [2024-07-13 19:55:22.691210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.232 [2024-07-13 19:55:22.691241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.232 [2024-07-13 19:55:22.691318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.232 [2024-07-13 19:55:22.691343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.232 [2024-07-13 19:55:22.691420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.232 [2024-07-13 19:55:22.691451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:35.232 #22 NEW cov: 12041 ft: 14847 corp: 21/44b lim: 5 exec/s: 22 rss: 71Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:35.232 [2024-07-13 19:55:22.750201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.232 [2024-07-13 19:55:22.750226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.232 #23 NEW cov: 12041 ft: 14873 corp: 22/45b lim: 5 exec/s: 23 rss: 71Mb L: 1/5 MS: 1 EraseBytes- 00:07:35.232 [2024-07-13 19:55:22.790605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.232 [2024-07-13 19:55:22.790629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.232 [2024-07-13 19:55:22.790701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.233 [2024-07-13 19:55:22.790715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.233 [2024-07-13 19:55:22.790769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.233 [2024-07-13 19:55:22.790782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.233 #24 NEW cov: 12041 ft: 14947 corp: 23/48b lim: 5 exec/s: 24 rss: 71Mb L: 3/5 MS: 1 ChangeByte- 00:07:35.233 [2024-07-13 19:55:22.830432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:35.233 [2024-07-13 19:55:22.830462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.233 #25 NEW cov: 12041 ft: 14964 corp: 24/49b lim: 5 exec/s: 25 rss: 71Mb L: 1/5 MS: 1 ChangeBit- 00:07:35.233 [2024-07-13 19:55:22.860958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.233 [2024-07-13 19:55:22.860982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.233 [2024-07-13 19:55:22.861053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.233 [2024-07-13 19:55:22.861066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.233 [2024-07-13 19:55:22.861120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.233 [2024-07-13 19:55:22.861133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.233 [2024-07-13 19:55:22.861190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.233 [2024-07-13 19:55:22.861204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.490 #26 NEW cov: 12041 ft: 15011 corp: 25/53b lim: 5 exec/s: 26 rss: 71Mb L: 4/5 MS: 1 CopyPart- 00:07:35.490 [2024-07-13 19:55:22.911246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.490 [2024-07-13 19:55:22.911269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.490 [2024-07-13 19:55:22.911324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.490 [2024-07-13 19:55:22.911338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.490 [2024-07-13 19:55:22.911390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.490 [2024-07-13 19:55:22.911403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.490 [2024-07-13 19:55:22.911456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.490 [2024-07-13 19:55:22.911468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.490 [2024-07-13 19:55:22.911520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 
nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.490 [2024-07-13 19:55:22.911533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:35.490 #27 NEW cov: 12041 ft: 15021 corp: 26/58b lim: 5 exec/s: 27 rss: 71Mb L: 5/5 MS: 1 CMP- DE: "\000\011"- 00:07:35.490 [2024-07-13 19:55:22.951067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.490 [2024-07-13 19:55:22.951091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.490 [2024-07-13 19:55:22.951143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.490 [2024-07-13 19:55:22.951156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.490 [2024-07-13 19:55:22.951210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.490 [2024-07-13 19:55:22.951239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.490 #28 NEW cov: 12041 ft: 15056 corp: 27/61b lim: 5 exec/s: 28 rss: 71Mb L: 3/5 MS: 1 ShuffleBytes- 00:07:35.490 [2024-07-13 19:55:22.990868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.490 [2024-07-13 19:55:22.990893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.490 #29 NEW cov: 12041 ft: 15143 corp: 28/62b lim: 5 exec/s: 29 rss: 71Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:35.490 [2024-07-13 19:55:23.031596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.490 [2024-07-13 19:55:23.031625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.490 [2024-07-13 19:55:23.031679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.490 [2024-07-13 19:55:23.031692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.490 [2024-07-13 19:55:23.031744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.490 [2024-07-13 19:55:23.031757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.490 [2024-07-13 19:55:23.031809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.490 [2024-07-13 19:55:23.031822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.490 [2024-07-13 19:55:23.031872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.490 [2024-07-13 19:55:23.031885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:35.490 #30 NEW cov: 12041 ft: 15150 corp: 29/67b lim: 5 exec/s: 30 rss: 71Mb L: 5/5 MS: 1 CrossOver- 00:07:35.490 [2024-07-13 19:55:23.071088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.490 [2024-07-13 19:55:23.071112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.490 #31 NEW cov: 12041 ft: 15155 corp: 30/68b lim: 5 exec/s: 31 rss: 71Mb L: 1/5 MS: 1 ChangeBit- 00:07:35.490 [2024-07-13 19:55:23.111830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.490 [2024-07-13 19:55:23.111854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.490 [2024-07-13 19:55:23.111910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.490 [2024-07-13 19:55:23.111923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.490 [2024-07-13 19:55:23.111976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.490 [2024-07-13 19:55:23.111990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.490 [2024-07-13 19:55:23.112041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.490 [2024-07-13 19:55:23.112054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.490 [2024-07-13 19:55:23.112108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.490 [2024-07-13 19:55:23.112121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:35.490 #32 NEW cov: 12041 ft: 15200 corp: 31/73b lim: 5 exec/s: 32 rss: 71Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:35.749 [2024-07-13 19:55:23.161484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.749 [2024-07-13 19:55:23.161509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.749 [2024-07-13 19:55:23.161578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 
cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.749 [2024-07-13 19:55:23.161592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.749 #33 NEW cov: 12041 ft: 15220 corp: 32/75b lim: 5 exec/s: 33 rss: 71Mb L: 2/5 MS: 1 InsertByte- 00:07:35.749 [2024-07-13 19:55:23.201588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.749 [2024-07-13 19:55:23.201613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.749 [2024-07-13 19:55:23.201666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.749 [2024-07-13 19:55:23.201679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.749 #34 NEW cov: 12041 ft: 15222 corp: 33/77b lim: 5 exec/s: 34 rss: 71Mb L: 2/5 MS: 1 InsertByte- 00:07:35.749 [2024-07-13 19:55:23.242005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.749 [2024-07-13 19:55:23.242030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.749 [2024-07-13 19:55:23.242084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.749 [2024-07-13 19:55:23.242097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.749 [2024-07-13 19:55:23.242152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.749 [2024-07-13 19:55:23.242166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.749 [2024-07-13 19:55:23.242219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.749 [2024-07-13 19:55:23.242233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.749 #35 NEW cov: 12041 ft: 15228 corp: 34/81b lim: 5 exec/s: 35 rss: 71Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:07:35.749 [2024-07-13 19:55:23.282138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.749 [2024-07-13 19:55:23.282163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.749 [2024-07-13 19:55:23.282215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.749 [2024-07-13 19:55:23.282228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.749 [2024-07-13 19:55:23.282280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.749 [2024-07-13 19:55:23.282294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.749 [2024-07-13 19:55:23.282347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.749 [2024-07-13 19:55:23.282360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.749 #36 NEW cov: 12041 ft: 15290 corp: 35/85b lim: 5 exec/s: 36 rss: 71Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:07:35.749 [2024-07-13 19:55:23.332100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.749 [2024-07-13 19:55:23.332124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.749 [2024-07-13 19:55:23.332178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.749 [2024-07-13 19:55:23.332192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.749 [2024-07-13 19:55:23.332244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.749 [2024-07-13 19:55:23.332257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.749 #37 NEW cov: 12041 ft: 15291 corp: 36/88b lim: 5 exec/s: 37 rss: 71Mb L: 3/5 MS: 1 CrossOver- 00:07:35.749 [2024-07-13 19:55:23.372029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.749 [2024-07-13 19:55:23.372055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.749 [2024-07-13 19:55:23.372105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.749 [2024-07-13 19:55:23.372118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.749 #38 NEW cov: 12041 ft: 15303 corp: 37/90b lim: 5 exec/s: 38 rss: 71Mb L: 2/5 MS: 1 EraseBytes- 00:07:36.008 [2024-07-13 19:55:23.412040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.008 [2024-07-13 19:55:23.412065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.008 #39 NEW cov: 12041 ft: 15343 corp: 38/91b lim: 5 exec/s: 39 rss: 71Mb L: 1/5 MS: 1 CopyPart- 00:07:36.008 [2024-07-13 19:55:23.442699] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.008 [2024-07-13 19:55:23.442723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.008 [2024-07-13 19:55:23.442775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.008 [2024-07-13 19:55:23.442788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.008 [2024-07-13 19:55:23.442838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.008 [2024-07-13 19:55:23.442852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.008 [2024-07-13 19:55:23.442903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.008 [2024-07-13 19:55:23.442918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.008 [2024-07-13 19:55:23.442968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:36.008 [2024-07-13 19:55:23.442981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:36.008 #40 NEW cov: 12041 ft: 15359 corp: 39/96b lim: 5 exec/s: 20 rss: 71Mb L: 5/5 MS: 1 ChangeBinInt- 00:07:36.008 #40 DONE cov: 12041 ft: 15359 corp: 39/96b lim: 5 exec/s: 20 rss: 71Mb 00:07:36.008 ###### Recommended dictionary. ###### 00:07:36.008 "\000\011" # Uses: 0 00:07:36.008 ###### End of recommended dictionary. 
###### 00:07:36.008 Done 40 runs in 2 second(s) 00:07:36.008 19:55:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:07:36.008 19:55:23 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:36.008 19:55:23 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:36.008 19:55:23 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:07:36.008 19:55:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:07:36.008 19:55:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:36.008 19:55:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:36.008 19:55:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:36.008 19:55:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:07:36.008 19:55:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:36.008 19:55:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:36.008 19:55:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:07:36.008 19:55:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4410 00:07:36.008 19:55:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:36.008 19:55:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:07:36.008 19:55:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:36.008 19:55:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:36.008 19:55:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:36.008 19:55:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:07:36.008 [2024-07-13 19:55:23.633436] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:36.008 [2024-07-13 19:55:23.633522] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3678035 ] 00:07:36.008 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.267 [2024-07-13 19:55:23.884608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.267 [2024-07-13 19:55:23.915641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.526 [2024-07-13 19:55:23.967933] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.526 [2024-07-13 19:55:23.984234] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:07:36.526 INFO: Running with entropic power schedule (0xFF, 100). 
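In sketch form, the per-fuzzer setup traced above amounts to the following shell reconstruction (inferred from the traced commands and the arguments passed to llvm_nvme_fuzz, not the actual nvmf/run.sh source; the sed/echo output redirections and the port derivation in particular are assumptions):

  #!/usr/bin/env bash
  # One NVMf fuzz target: index 10 listens on TCP port 4410, uses its own JSON
  # config and corpus directory, and runs under LSAN with two known-benign leaks
  # suppressed. Paths mirror the trace; variable names are illustrative.
  rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  fuzzer_type=10
  timen=1
  core=0x1
  port="44$(printf %02d "$fuzzer_type")"            # 10 -> 4410, 11 -> 4411
  corpus_dir="$rootdir/../corpus/llvm_nvmf_${fuzzer_type}"
  nvmf_cfg="/tmp/fuzz_json_${fuzzer_type}.conf"
  suppress_file=/var/tmp/suppress_nvmf_fuzz
  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:${port}"

  mkdir -p "$corpus_dir"
  # retarget the template config at this fuzzer's port (writing to $nvmf_cfg is
  # inferred from the -c argument below; the trace does not show the redirection)
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"${port}\"/" \
      "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
  # allocations that are intentionally left to process teardown, not real leaks
  { echo leak:spdk_nvmf_qpair_disconnect; echo leak:nvmf_ctrlr_create; } > "$suppress_file"

  LSAN_OPTIONS="report_objects=1:suppressions=${suppress_file}:print_suppressions=0" \
    "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
      -m "$core" -s 512 -P "$rootdir/../output/llvm/" \
      -F "$trid" -c "$nvmf_cfg" -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"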
00:07:36.526 INFO: Seed: 1233147649 00:07:36.526 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:36.526 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:36.526 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:36.526 INFO: A corpus is not provided, starting from an empty corpus 00:07:36.526 #2 INITED exec/s: 0 rss: 62Mb 00:07:36.526 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:36.526 This may also happen if the target rejected all inputs we tried so far 00:07:36.526 [2024-07-13 19:55:24.029803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.526 [2024-07-13 19:55:24.029831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.526 [2024-07-13 19:55:24.029890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.526 [2024-07-13 19:55:24.029904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.526 [2024-07-13 19:55:24.029957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.526 [2024-07-13 19:55:24.029971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.526 [2024-07-13 19:55:24.030025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.526 [2024-07-13 19:55:24.030038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.785 NEW_FUNC[1/691]: 0x4a0820 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:07:36.786 NEW_FUNC[2/691]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:36.786 #9 NEW cov: 11803 ft: 11819 corp: 2/36b lim: 40 exec/s: 0 rss: 69Mb L: 35/35 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:36.786 [2024-07-13 19:55:24.340158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.786 [2024-07-13 19:55:24.340193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.786 #14 NEW cov: 11950 ft: 12973 corp: 3/46b lim: 40 exec/s: 0 rss: 70Mb L: 10/35 MS: 5 CrossOver-InsertByte-CrossOver-CrossOver-CrossOver- 00:07:36.786 [2024-07-13 19:55:24.380532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.786 [2024-07-13 19:55:24.380560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.786 [2024-07-13 19:55:24.380617] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00dc0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.786 [2024-07-13 19:55:24.380631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.786 [2024-07-13 19:55:24.380687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.786 [2024-07-13 19:55:24.380701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.786 [2024-07-13 19:55:24.380757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.786 [2024-07-13 19:55:24.380770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.786 #15 NEW cov: 11956 ft: 13193 corp: 4/81b lim: 40 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 ChangeByte- 00:07:36.786 [2024-07-13 19:55:24.430771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.786 [2024-07-13 19:55:24.430796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.786 [2024-07-13 19:55:24.430855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00dcffff cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.786 [2024-07-13 19:55:24.430869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.786 [2024-07-13 19:55:24.430925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.786 [2024-07-13 19:55:24.430938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.786 [2024-07-13 19:55:24.430993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.786 [2024-07-13 19:55:24.431006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.786 [2024-07-13 19:55:24.431064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000660a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.786 [2024-07-13 19:55:24.431077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:37.046 #21 NEW cov: 12041 ft: 13484 corp: 5/121b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:37.046 [2024-07-13 19:55:24.480458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:40000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.046 [2024-07-13 19:55:24.480483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:07:37.046 #24 NEW cov: 12041 ft: 13560 corp: 6/129b lim: 40 exec/s: 0 rss: 70Mb L: 8/40 MS: 3 EraseBytes-ChangeBinInt-InsertByte- 00:07:37.046 [2024-07-13 19:55:24.530604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.046 [2024-07-13 19:55:24.530629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.046 #25 NEW cov: 12041 ft: 13680 corp: 7/140b lim: 40 exec/s: 0 rss: 70Mb L: 11/40 MS: 1 CrossOver- 00:07:37.046 [2024-07-13 19:55:24.571059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:03000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.046 [2024-07-13 19:55:24.571084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.046 [2024-07-13 19:55:24.571140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00dc0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.046 [2024-07-13 19:55:24.571154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.046 [2024-07-13 19:55:24.571211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.046 [2024-07-13 19:55:24.571225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.046 [2024-07-13 19:55:24.571279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.046 [2024-07-13 19:55:24.571295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.046 #26 NEW cov: 12041 ft: 13722 corp: 8/175b lim: 40 exec/s: 0 rss: 70Mb L: 35/40 MS: 1 ChangeBinInt- 00:07:37.046 [2024-07-13 19:55:24.611281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:03000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.046 [2024-07-13 19:55:24.611306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.046 [2024-07-13 19:55:24.611379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffdc00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.046 [2024-07-13 19:55:24.611393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.046 [2024-07-13 19:55:24.611453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.046 [2024-07-13 19:55:24.611467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.046 [2024-07-13 19:55:24.611524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.046 [2024-07-13 19:55:24.611537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.046 [2024-07-13 19:55:24.611595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000660a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.046 [2024-07-13 19:55:24.611609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:37.046 #27 NEW cov: 12041 ft: 13771 corp: 9/215b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:37.046 [2024-07-13 19:55:24.660970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:40000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.046 [2024-07-13 19:55:24.660995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.046 #28 NEW cov: 12041 ft: 13868 corp: 10/224b lim: 40 exec/s: 0 rss: 70Mb L: 9/40 MS: 1 InsertByte- 00:07:37.305 [2024-07-13 19:55:24.711118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:05000000 cdw11:40000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.305 [2024-07-13 19:55:24.711144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.305 #29 NEW cov: 12041 ft: 13895 corp: 11/232b lim: 40 exec/s: 0 rss: 70Mb L: 8/40 MS: 1 ChangeBinInt- 00:07:37.305 [2024-07-13 19:55:24.751200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000040 cdw11:00070000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.305 [2024-07-13 19:55:24.751225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.305 #30 NEW cov: 12041 ft: 13933 corp: 12/247b lim: 40 exec/s: 0 rss: 70Mb L: 15/40 MS: 1 CopyPart- 00:07:37.305 [2024-07-13 19:55:24.791844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.305 [2024-07-13 19:55:24.791870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.305 [2024-07-13 19:55:24.791930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00dcffff cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.305 [2024-07-13 19:55:24.791947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.305 [2024-07-13 19:55:24.792004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.305 [2024-07-13 19:55:24.792018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.305 [2024-07-13 19:55:24.792076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.305 [2024-07-13 19:55:24.792089] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.305 [2024-07-13 19:55:24.792147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000660a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.305 [2024-07-13 19:55:24.792160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:37.305 #31 NEW cov: 12041 ft: 13985 corp: 13/287b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:37.305 [2024-07-13 19:55:24.841884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.305 [2024-07-13 19:55:24.841910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.305 [2024-07-13 19:55:24.841969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00dcffff cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.305 [2024-07-13 19:55:24.841984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.305 [2024-07-13 19:55:24.842039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.305 [2024-07-13 19:55:24.842053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.306 [2024-07-13 19:55:24.842107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.306 [2024-07-13 19:55:24.842121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.306 #32 NEW cov: 12041 ft: 14061 corp: 14/326b lim: 40 exec/s: 0 rss: 70Mb L: 39/40 MS: 1 EraseBytes- 00:07:37.306 [2024-07-13 19:55:24.881933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.306 [2024-07-13 19:55:24.881958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.306 [2024-07-13 19:55:24.882031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.306 [2024-07-13 19:55:24.882045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.306 [2024-07-13 19:55:24.882101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.306 [2024-07-13 19:55:24.882115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.306 [2024-07-13 19:55:24.882172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:37.306 [2024-07-13 19:55:24.882189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.306 #33 NEW cov: 12041 ft: 14089 corp: 15/361b lim: 40 exec/s: 0 rss: 70Mb L: 35/40 MS: 1 CopyPart- 00:07:37.306 [2024-07-13 19:55:24.922180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:03000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.306 [2024-07-13 19:55:24.922205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.306 [2024-07-13 19:55:24.922278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffdc00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.306 [2024-07-13 19:55:24.922292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.306 [2024-07-13 19:55:24.922351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.306 [2024-07-13 19:55:24.922364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.306 [2024-07-13 19:55:24.922421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.306 [2024-07-13 19:55:24.922434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.306 [2024-07-13 19:55:24.922493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000660a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.306 [2024-07-13 19:55:24.922507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:37.306 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:37.306 #34 NEW cov: 12064 ft: 14134 corp: 16/401b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 CrossOver- 00:07:37.565 [2024-07-13 19:55:24.972306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:03000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-07-13 19:55:24.972331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.565 [2024-07-13 19:55:24.972390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffdc00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-07-13 19:55:24.972403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.565 [2024-07-13 19:55:24.972462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-07-13 19:55:24.972491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.565 
[2024-07-13 19:55:24.972548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-07-13 19:55:24.972561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.565 [2024-07-13 19:55:24.972615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000660a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-07-13 19:55:24.972629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:37.565 #35 NEW cov: 12064 ft: 14144 corp: 17/441b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ChangeBinInt- 00:07:37.565 [2024-07-13 19:55:25.011928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000040 cdw11:00070000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-07-13 19:55:25.011954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.565 #36 NEW cov: 12064 ft: 14169 corp: 18/453b lim: 40 exec/s: 36 rss: 70Mb L: 12/40 MS: 1 EraseBytes- 00:07:37.565 [2024-07-13 19:55:25.062064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000040 cdw11:00070000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-07-13 19:55:25.062090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.565 #37 NEW cov: 12064 ft: 14188 corp: 19/468b lim: 40 exec/s: 37 rss: 70Mb L: 15/40 MS: 1 CopyPart- 00:07:37.565 [2024-07-13 19:55:25.102343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffdc cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-07-13 19:55:25.102368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.565 [2024-07-13 19:55:25.102446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-07-13 19:55:25.102460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.565 #38 NEW cov: 12064 ft: 14395 corp: 20/490b lim: 40 exec/s: 38 rss: 70Mb L: 22/40 MS: 1 CrossOver- 00:07:37.565 [2024-07-13 19:55:25.142789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:03000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-07-13 19:55:25.142814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.565 [2024-07-13 19:55:25.142873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffdc02 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-07-13 19:55:25.142887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.565 [2024-07-13 19:55:25.142957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY 
RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-07-13 19:55:25.142970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.565 [2024-07-13 19:55:25.143029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-07-13 19:55:25.143042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.565 [2024-07-13 19:55:25.143099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000660a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-07-13 19:55:25.143113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:37.565 #39 NEW cov: 12064 ft: 14410 corp: 21/530b lim: 40 exec/s: 39 rss: 70Mb L: 40/40 MS: 1 ChangeBit- 00:07:37.565 [2024-07-13 19:55:25.182453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000040 cdw11:00070000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.565 [2024-07-13 19:55:25.182477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.565 #40 NEW cov: 12064 ft: 14421 corp: 22/542b lim: 40 exec/s: 40 rss: 70Mb L: 12/40 MS: 1 CopyPart- 00:07:37.825 [2024-07-13 19:55:25.232591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00f8ff3f cdw11:00070000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.825 [2024-07-13 19:55:25.232619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.825 #41 NEW cov: 12064 ft: 14438 corp: 23/557b lim: 40 exec/s: 41 rss: 70Mb L: 15/40 MS: 1 ChangeBinInt- 00:07:37.825 [2024-07-13 19:55:25.272770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffdc cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.825 [2024-07-13 19:55:25.272794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.825 [2024-07-13 19:55:25.272852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.825 [2024-07-13 19:55:25.272866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.825 #42 NEW cov: 12064 ft: 14494 corp: 24/580b lim: 40 exec/s: 42 rss: 70Mb L: 23/40 MS: 1 CrossOver- 00:07:37.825 [2024-07-13 19:55:25.322812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00f8ff3f cdw11:31070000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.825 [2024-07-13 19:55:25.322836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.825 #43 NEW cov: 12064 ft: 14545 corp: 25/595b lim: 40 exec/s: 43 rss: 71Mb L: 15/40 MS: 1 ChangeByte- 00:07:37.825 [2024-07-13 19:55:25.373305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.825 [2024-07-13 19:55:25.373329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.825 [2024-07-13 19:55:25.373387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00dcffff cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.825 [2024-07-13 19:55:25.373401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.825 [2024-07-13 19:55:25.373461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.825 [2024-07-13 19:55:25.373475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.825 [2024-07-13 19:55:25.373531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0000dc00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.825 [2024-07-13 19:55:25.373543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.825 #44 NEW cov: 12064 ft: 14583 corp: 26/634b lim: 40 exec/s: 44 rss: 71Mb L: 39/40 MS: 1 CopyPart- 00:07:37.825 [2024-07-13 19:55:25.423476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.825 [2024-07-13 19:55:25.423501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.825 [2024-07-13 19:55:25.423582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.825 [2024-07-13 19:55:25.423595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.825 [2024-07-13 19:55:25.423650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.825 [2024-07-13 19:55:25.423663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.825 [2024-07-13 19:55:25.423722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.825 [2024-07-13 19:55:25.423735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.825 #45 NEW cov: 12064 ft: 14654 corp: 27/671b lim: 40 exec/s: 45 rss: 71Mb L: 37/40 MS: 1 CopyPart- 00:07:37.825 [2024-07-13 19:55:25.463318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:03000000 cdw11:00000a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.825 [2024-07-13 19:55:25.463343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.825 [2024-07-13 19:55:25.463400] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.825 [2024-07-13 19:55:25.463413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.085 #46 NEW cov: 12064 ft: 14657 corp: 28/694b lim: 40 exec/s: 46 rss: 71Mb L: 23/40 MS: 1 EraseBytes- 00:07:38.085 [2024-07-13 19:55:25.513338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000040 cdw11:00070000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.085 [2024-07-13 19:55:25.513362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.085 #47 NEW cov: 12064 ft: 14683 corp: 29/707b lim: 40 exec/s: 47 rss: 71Mb L: 13/40 MS: 1 InsertByte- 00:07:38.085 [2024-07-13 19:55:25.563520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:10000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.085 [2024-07-13 19:55:25.563544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.085 #48 NEW cov: 12064 ft: 14697 corp: 30/717b lim: 40 exec/s: 48 rss: 71Mb L: 10/40 MS: 1 ChangeBit- 00:07:38.085 [2024-07-13 19:55:25.604086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:03000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.085 [2024-07-13 19:55:25.604111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.085 [2024-07-13 19:55:25.604169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffdc02 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.085 [2024-07-13 19:55:25.604183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.085 [2024-07-13 19:55:25.604240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000600 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.085 [2024-07-13 19:55:25.604253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.085 [2024-07-13 19:55:25.604309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.085 [2024-07-13 19:55:25.604322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.085 [2024-07-13 19:55:25.604378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000660a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.085 [2024-07-13 19:55:25.604392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.085 #49 NEW cov: 12064 ft: 14751 corp: 31/757b lim: 40 exec/s: 49 rss: 71Mb L: 40/40 MS: 1 ChangeBinInt- 00:07:38.085 [2024-07-13 19:55:25.654147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 
nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.085 [2024-07-13 19:55:25.654175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.085 [2024-07-13 19:55:25.654232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00dcffff cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.085 [2024-07-13 19:55:25.654245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.085 [2024-07-13 19:55:25.654303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.085 [2024-07-13 19:55:25.654316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.085 [2024-07-13 19:55:25.654374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.085 [2024-07-13 19:55:25.654387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.085 [2024-07-13 19:55:25.654449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000660a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.085 [2024-07-13 19:55:25.654462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.085 #50 NEW cov: 12064 ft: 14780 corp: 32/797b lim: 40 exec/s: 50 rss: 71Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:38.085 [2024-07-13 19:55:25.704335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.085 [2024-07-13 19:55:25.704360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.085 [2024-07-13 19:55:25.704417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00dcffff cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.085 [2024-07-13 19:55:25.704430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.085 [2024-07-13 19:55:25.704487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.085 [2024-07-13 19:55:25.704517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.085 [2024-07-13 19:55:25.704570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.085 [2024-07-13 19:55:25.704584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.085 [2024-07-13 19:55:25.704638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000660a SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:38.085 [2024-07-13 19:55:25.704651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.085 #51 NEW cov: 12064 ft: 14844 corp: 33/837b lim: 40 exec/s: 51 rss: 71Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:38.344 [2024-07-13 19:55:25.754408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.345 [2024-07-13 19:55:25.754432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.345 [2024-07-13 19:55:25.754493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00dc2800 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.345 [2024-07-13 19:55:25.754510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.345 [2024-07-13 19:55:25.754563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.345 [2024-07-13 19:55:25.754576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.345 [2024-07-13 19:55:25.754627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.345 [2024-07-13 19:55:25.754640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.345 #52 NEW cov: 12064 ft: 14861 corp: 34/873b lim: 40 exec/s: 52 rss: 71Mb L: 36/40 MS: 1 InsertByte- 00:07:38.345 [2024-07-13 19:55:25.794479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.345 [2024-07-13 19:55:25.794503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.345 [2024-07-13 19:55:25.794556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.345 [2024-07-13 19:55:25.794569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.345 [2024-07-13 19:55:25.794623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:b7000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.345 [2024-07-13 19:55:25.794637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.345 [2024-07-13 19:55:25.794691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.345 [2024-07-13 19:55:25.794704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.345 #53 NEW cov: 12064 ft: 14878 corp: 35/908b lim: 40 exec/s: 53 rss: 71Mb L: 35/40 MS: 1 ChangeByte- 00:07:38.345 [2024-07-13 
19:55:25.844402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.345 [2024-07-13 19:55:25.844426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.345 [2024-07-13 19:55:25.844484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.345 [2024-07-13 19:55:25.844498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.345 #54 NEW cov: 12064 ft: 14884 corp: 36/929b lim: 40 exec/s: 54 rss: 72Mb L: 21/40 MS: 1 CopyPart- 00:07:38.345 [2024-07-13 19:55:25.894411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000040 cdw11:00070000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.345 [2024-07-13 19:55:25.894434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.345 #60 NEW cov: 12064 ft: 14890 corp: 37/941b lim: 40 exec/s: 60 rss: 72Mb L: 12/40 MS: 1 ShuffleBytes- 00:07:38.345 [2024-07-13 19:55:25.934524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:40000800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.345 [2024-07-13 19:55:25.934549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.345 #61 NEW cov: 12064 ft: 14927 corp: 38/949b lim: 40 exec/s: 61 rss: 72Mb L: 8/40 MS: 1 ChangeBinInt- 00:07:38.345 [2024-07-13 19:55:25.974645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:3b400007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.345 [2024-07-13 19:55:25.974670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.605 #62 NEW cov: 12064 ft: 14937 corp: 39/959b lim: 40 exec/s: 62 rss: 72Mb L: 10/40 MS: 1 InsertByte- 00:07:38.605 [2024-07-13 19:55:26.025193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:03000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.605 [2024-07-13 19:55:26.025218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.605 [2024-07-13 19:55:26.025273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00ffffff cdw11:ffffdc00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.605 [2024-07-13 19:55:26.025287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.605 [2024-07-13 19:55:26.025341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.605 [2024-07-13 19:55:26.025354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.605 [2024-07-13 19:55:26.025408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE 
(82) qid:0 cid:7 nsid:0 cdw10:00002000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.605 [2024-07-13 19:55:26.025421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:38.605 [2024-07-13 19:55:26.025472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000660a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:38.605 [2024-07-13 19:55:26.025485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:38.605 #63 NEW cov: 12064 ft: 14944 corp: 40/999b lim: 40 exec/s: 31 rss: 72Mb L: 40/40 MS: 1 ChangeBit- 00:07:38.605 #63 DONE cov: 12064 ft: 14944 corp: 40/999b lim: 40 exec/s: 31 rss: 72Mb 00:07:38.605 Done 63 runs in 2 second(s) 00:07:38.605 19:55:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:07:38.605 19:55:26 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:38.605 19:55:26 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:38.605 19:55:26 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:07:38.605 19:55:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:07:38.605 19:55:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:38.605 19:55:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:38.605 19:55:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:38.605 19:55:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:07:38.605 19:55:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:38.605 19:55:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:38.605 19:55:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:07:38.605 19:55:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4411 00:07:38.605 19:55:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:38.605 19:55:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:07:38.605 19:55:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:38.605 19:55:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:38.605 19:55:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:38.605 19:55:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:07:38.605 [2024-07-13 19:55:26.216299] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
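The same sequence then repeats for the next index: ../common.sh advances the counter and calls start_llvm_fuzz again, which derives the next port (4411 here), rewrites the JSON config for that port, writes the same leak suppressions, and launches llvm_nvme_fuzz against corpus llvm_nvmf_11. A minimal sketch of that driver loop, assuming FUZZ_NUM is supplied by the caller and start_llvm_fuzz is the per-target function outlined earlier:

  #!/usr/bin/env bash
  # Illustrative reconstruction of the loop visible at ../common.sh@72-73 in the
  # trace; not the actual source. Each pass runs one fuzzer index for the
  # configured time, and start_llvm_fuzz removes /tmp/fuzz_json_${i}.conf and the
  # suppression file when it finishes (nvmf/run.sh@54 above).
  timen=1
  core=0x1
  fuzz_num="${FUZZ_NUM:?total number of fuzz targets; not shown in this excerpt}"
  for (( i = 0; i < fuzz_num; i++ )); do
    start_llvm_fuzz "$i" "$timen" "$core"   # e.g. i=10 -> port 4410, i=11 -> port 4411
  done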
00:07:38.605 [2024-07-13 19:55:26.216382] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3678384 ] 00:07:38.605 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.864 [2024-07-13 19:55:26.467576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.864 [2024-07-13 19:55:26.495723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.122 [2024-07-13 19:55:26.547959] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.122 [2024-07-13 19:55:26.564269] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:07:39.122 INFO: Running with entropic power schedule (0xFF, 100). 00:07:39.122 INFO: Seed: 3812158890 00:07:39.122 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:39.122 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:39.122 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:39.122 INFO: A corpus is not provided, starting from an empty corpus 00:07:39.122 #2 INITED exec/s: 0 rss: 62Mb 00:07:39.122 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:39.122 This may also happen if the target rejected all inputs we tried so far 00:07:39.122 [2024-07-13 19:55:26.635587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:74000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.123 [2024-07-13 19:55:26.635624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.123 [2024-07-13 19:55:26.635717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.123 [2024-07-13 19:55:26.635732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.123 [2024-07-13 19:55:26.635803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.123 [2024-07-13 19:55:26.635818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.123 [2024-07-13 19:55:26.635895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.123 [2024-07-13 19:55:26.635909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.382 NEW_FUNC[1/692]: 0x4a2590 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:07:39.382 NEW_FUNC[2/692]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:39.382 #10 NEW cov: 11832 ft: 11833 corp: 2/39b lim: 40 exec/s: 0 rss: 68Mb L: 38/38 MS: 3 CopyPart-InsertByte-InsertRepeatedBytes- 00:07:39.382 [2024-07-13 19:55:26.975982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:74000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.382 [2024-07-13 19:55:26.976034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.382 [2024-07-13 19:55:26.976178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.382 [2024-07-13 19:55:26.976199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.382 [2024-07-13 19:55:26.976338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.382 [2024-07-13 19:55:26.976361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.382 [2024-07-13 19:55:26.976514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.382 [2024-07-13 19:55:26.976534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.382 [2024-07-13 19:55:26.976681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.382 [2024-07-13 19:55:26.976704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:39.382 #21 NEW cov: 11962 ft: 12522 corp: 3/79b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 CopyPart- 00:07:39.382 [2024-07-13 19:55:27.025359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0e000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.382 [2024-07-13 19:55:27.025387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.382 [2024-07-13 19:55:27.025530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.382 [2024-07-13 19:55:27.025546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.382 [2024-07-13 19:55:27.025682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.382 [2024-07-13 19:55:27.025699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.642 #23 NEW cov: 11968 ft: 13167 corp: 4/106b lim: 40 exec/s: 0 rss: 69Mb L: 27/40 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:39.642 [2024-07-13 19:55:27.065465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:74000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.642 [2024-07-13 19:55:27.065505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.642 [2024-07-13 19:55:27.065627] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.642 [2024-07-13 19:55:27.065646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.642 [2024-07-13 19:55:27.065767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.642 [2024-07-13 19:55:27.065785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.642 #24 NEW cov: 12053 ft: 13440 corp: 5/132b lim: 40 exec/s: 0 rss: 69Mb L: 26/40 MS: 1 EraseBytes- 00:07:39.642 [2024-07-13 19:55:27.115962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:74000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.642 [2024-07-13 19:55:27.115995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.642 [2024-07-13 19:55:27.116138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.642 [2024-07-13 19:55:27.116155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.642 [2024-07-13 19:55:27.116293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.642 [2024-07-13 19:55:27.116311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.642 [2024-07-13 19:55:27.116451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.642 [2024-07-13 19:55:27.116469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.642 #25 NEW cov: 12053 ft: 13589 corp: 6/170b lim: 40 exec/s: 0 rss: 69Mb L: 38/40 MS: 1 ChangeByte- 00:07:39.642 [2024-07-13 19:55:27.155756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0e000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.642 [2024-07-13 19:55:27.155783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.642 [2024-07-13 19:55:27.155914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.642 [2024-07-13 19:55:27.155933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.642 [2024-07-13 19:55:27.156063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.642 [2024-07-13 19:55:27.156081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.642 #26 NEW cov: 12053 ft: 13627 corp: 
7/197b lim: 40 exec/s: 0 rss: 69Mb L: 27/40 MS: 1 ChangeBit- 00:07:39.642 [2024-07-13 19:55:27.206515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:74000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.642 [2024-07-13 19:55:27.206542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.642 [2024-07-13 19:55:27.206671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.642 [2024-07-13 19:55:27.206687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.642 [2024-07-13 19:55:27.206807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.642 [2024-07-13 19:55:27.206824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.642 [2024-07-13 19:55:27.206938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.642 [2024-07-13 19:55:27.206953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.642 [2024-07-13 19:55:27.207073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.642 [2024-07-13 19:55:27.207092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:39.642 #27 NEW cov: 12053 ft: 13678 corp: 8/237b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 CopyPart- 00:07:39.642 [2024-07-13 19:55:27.246549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:74000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.642 [2024-07-13 19:55:27.246576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.642 [2024-07-13 19:55:27.246708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.642 [2024-07-13 19:55:27.246726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.642 [2024-07-13 19:55:27.246854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.642 [2024-07-13 19:55:27.246871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.642 [2024-07-13 19:55:27.246997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00fb0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.642 [2024-07-13 19:55:27.247013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.642 [2024-07-13 19:55:27.247143] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.642 [2024-07-13 19:55:27.247158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:39.642 #28 NEW cov: 12053 ft: 13764 corp: 9/277b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 ChangeBinInt- 00:07:39.642 [2024-07-13 19:55:27.286156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.642 [2024-07-13 19:55:27.286184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.642 [2024-07-13 19:55:27.286320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.642 [2024-07-13 19:55:27.286337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.642 [2024-07-13 19:55:27.286468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.642 [2024-07-13 19:55:27.286486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.902 #30 NEW cov: 12053 ft: 13814 corp: 10/306b lim: 40 exec/s: 0 rss: 69Mb L: 29/40 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:39.902 [2024-07-13 19:55:27.326524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:74000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.902 [2024-07-13 19:55:27.326549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.902 [2024-07-13 19:55:27.326678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:000000d1 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.902 [2024-07-13 19:55:27.326696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.902 [2024-07-13 19:55:27.326826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.902 [2024-07-13 19:55:27.326845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.902 [2024-07-13 19:55:27.326975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.902 [2024-07-13 19:55:27.326992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.902 #31 NEW cov: 12053 ft: 13918 corp: 11/345b lim: 40 exec/s: 0 rss: 69Mb L: 39/40 MS: 1 CopyPart- 00:07:39.902 [2024-07-13 19:55:27.376411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0e000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.903 [2024-07-13 19:55:27.376437] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.903 [2024-07-13 19:55:27.376583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.903 [2024-07-13 19:55:27.376600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.903 [2024-07-13 19:55:27.376734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.903 [2024-07-13 19:55:27.376750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.903 #37 NEW cov: 12053 ft: 13933 corp: 12/369b lim: 40 exec/s: 0 rss: 70Mb L: 24/40 MS: 1 EraseBytes- 00:07:39.903 [2024-07-13 19:55:27.427119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:74000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.903 [2024-07-13 19:55:27.427145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.903 [2024-07-13 19:55:27.427280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.903 [2024-07-13 19:55:27.427297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.903 [2024-07-13 19:55:27.427421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.903 [2024-07-13 19:55:27.427439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.903 [2024-07-13 19:55:27.427570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000800 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.903 [2024-07-13 19:55:27.427588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.903 [2024-07-13 19:55:27.427719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.903 [2024-07-13 19:55:27.427733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:39.903 #38 NEW cov: 12053 ft: 13944 corp: 13/409b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ChangeBit- 00:07:39.903 [2024-07-13 19:55:27.467227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:74000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.903 [2024-07-13 19:55:27.467253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.903 [2024-07-13 19:55:27.467389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.903 [2024-07-13 19:55:27.467409] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.903 [2024-07-13 19:55:27.467552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.903 [2024-07-13 19:55:27.467569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.903 [2024-07-13 19:55:27.467700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00007400 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.903 [2024-07-13 19:55:27.467718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.903 [2024-07-13 19:55:27.467844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.903 [2024-07-13 19:55:27.467861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:39.903 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:39.903 #39 NEW cov: 12076 ft: 14036 corp: 14/449b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 CopyPart- 00:07:39.903 [2024-07-13 19:55:27.527050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:74000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.903 [2024-07-13 19:55:27.527076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.903 [2024-07-13 19:55:27.527207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:000000d1 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.903 [2024-07-13 19:55:27.527223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.903 [2024-07-13 19:55:27.527363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.903 [2024-07-13 19:55:27.527379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.903 [2024-07-13 19:55:27.527520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:000000d1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.903 [2024-07-13 19:55:27.527536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.903 #40 NEW cov: 12076 ft: 14042 corp: 15/483b lim: 40 exec/s: 0 rss: 70Mb L: 34/40 MS: 1 EraseBytes- 00:07:40.163 [2024-07-13 19:55:27.577431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00007400 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.163 [2024-07-13 19:55:27.577462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.163 [2024-07-13 19:55:27.577605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND 
(81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.163 [2024-07-13 19:55:27.577621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.163 [2024-07-13 19:55:27.577749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.163 [2024-07-13 19:55:27.577765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.163 [2024-07-13 19:55:27.577896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.163 [2024-07-13 19:55:27.577915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.163 [2024-07-13 19:55:27.578055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00d1000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.163 [2024-07-13 19:55:27.578072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:40.163 #41 NEW cov: 12076 ft: 14067 corp: 16/523b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 CopyPart- 00:07:40.163 [2024-07-13 19:55:27.617354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:74000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.163 [2024-07-13 19:55:27.617380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.163 [2024-07-13 19:55:27.617523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.163 [2024-07-13 19:55:27.617539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.163 [2024-07-13 19:55:27.617676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.163 [2024-07-13 19:55:27.617692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.163 [2024-07-13 19:55:27.617825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000700 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.163 [2024-07-13 19:55:27.617841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.163 #42 NEW cov: 12076 ft: 14092 corp: 17/561b lim: 40 exec/s: 42 rss: 70Mb L: 38/40 MS: 1 ChangeBinInt- 00:07:40.163 [2024-07-13 19:55:27.657464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:74000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.163 [2024-07-13 19:55:27.657489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.163 [2024-07-13 19:55:27.657612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.163 [2024-07-13 19:55:27.657630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.163 [2024-07-13 19:55:27.657757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.163 [2024-07-13 19:55:27.657773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.163 [2024-07-13 19:55:27.657904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.163 [2024-07-13 19:55:27.657920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.163 #43 NEW cov: 12076 ft: 14105 corp: 18/599b lim: 40 exec/s: 43 rss: 70Mb L: 38/40 MS: 1 ShuffleBytes- 00:07:40.163 [2024-07-13 19:55:27.697325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0e000000 cdw11:00000018 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.163 [2024-07-13 19:55:27.697353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.163 [2024-07-13 19:55:27.697491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.163 [2024-07-13 19:55:27.697510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.163 [2024-07-13 19:55:27.697646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.163 [2024-07-13 19:55:27.697663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.163 #44 NEW cov: 12076 ft: 14184 corp: 19/623b lim: 40 exec/s: 44 rss: 70Mb L: 24/40 MS: 1 ChangeBinInt- 00:07:40.163 [2024-07-13 19:55:27.747747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:74000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.163 [2024-07-13 19:55:27.747773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.163 [2024-07-13 19:55:27.747914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:000000d1 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.163 [2024-07-13 19:55:27.747931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.163 [2024-07-13 19:55:27.748066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.163 [2024-07-13 19:55:27.748081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.163 [2024-07-13 19:55:27.748218] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.163 [2024-07-13 19:55:27.748235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.163 #45 NEW cov: 12076 ft: 14193 corp: 20/662b lim: 40 exec/s: 45 rss: 70Mb L: 39/40 MS: 1 ShuffleBytes- 00:07:40.163 [2024-07-13 19:55:27.787626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00007400 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.163 [2024-07-13 19:55:27.787652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.163 [2024-07-13 19:55:27.787782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.163 [2024-07-13 19:55:27.787798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.163 [2024-07-13 19:55:27.787932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000d100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.163 [2024-07-13 19:55:27.787948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.423 #46 NEW cov: 12076 ft: 14216 corp: 21/687b lim: 40 exec/s: 46 rss: 70Mb L: 25/40 MS: 1 EraseBytes- 00:07:40.423 [2024-07-13 19:55:27.837749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0e000000 cdw11:00000400 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-13 19:55:27.837776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.423 [2024-07-13 19:55:27.837905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-13 19:55:27.837923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.423 [2024-07-13 19:55:27.838052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-13 19:55:27.838071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.423 #47 NEW cov: 12076 ft: 14222 corp: 22/714b lim: 40 exec/s: 47 rss: 70Mb L: 27/40 MS: 1 ChangeBit- 00:07:40.423 [2024-07-13 19:55:27.878490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00007400 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-13 19:55:27.878517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.423 [2024-07-13 19:55:27.878653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-13 19:55:27.878671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.423 [2024-07-13 19:55:27.878787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-13 19:55:27.878803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.423 [2024-07-13 19:55:27.878932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:2b000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-13 19:55:27.878950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.423 [2024-07-13 19:55:27.879076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00d1000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-13 19:55:27.879094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:40.423 #48 NEW cov: 12076 ft: 14250 corp: 23/754b lim: 40 exec/s: 48 rss: 70Mb L: 40/40 MS: 1 ChangeByte- 00:07:40.423 [2024-07-13 19:55:27.918306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:74000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-13 19:55:27.918334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.423 [2024-07-13 19:55:27.918463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-13 19:55:27.918480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.423 [2024-07-13 19:55:27.918610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-13 19:55:27.918627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.423 [2024-07-13 19:55:27.918745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-13 19:55:27.918762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.423 #49 NEW cov: 12076 ft: 14290 corp: 24/791b lim: 40 exec/s: 49 rss: 70Mb L: 37/40 MS: 1 EraseBytes- 00:07:40.423 [2024-07-13 19:55:27.958499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0e000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-13 19:55:27.958526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.423 [2024-07-13 19:55:27.958656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-13 19:55:27.958674] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.423 [2024-07-13 19:55:27.958790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-13 19:55:27.958807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.423 [2024-07-13 19:55:27.958930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-13 19:55:27.958946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.423 #50 NEW cov: 12076 ft: 14316 corp: 25/829b lim: 40 exec/s: 50 rss: 70Mb L: 38/40 MS: 1 CopyPart- 00:07:40.423 [2024-07-13 19:55:27.998562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:74000000 cdw11:77777700 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-13 19:55:27.998589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.423 [2024-07-13 19:55:27.998729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000d100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-13 19:55:27.998747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.423 [2024-07-13 19:55:27.998876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-13 19:55:27.998893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.423 [2024-07-13 19:55:27.999030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-13 19:55:27.999048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.423 #51 NEW cov: 12076 ft: 14338 corp: 26/866b lim: 40 exec/s: 51 rss: 70Mb L: 37/40 MS: 1 InsertRepeatedBytes- 00:07:40.423 [2024-07-13 19:55:28.048664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:74000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-13 19:55:28.048691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.423 [2024-07-13 19:55:28.048838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:000000d1 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-13 19:55:28.048855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.423 [2024-07-13 19:55:28.048998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-13 19:55:28.049015] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.423 [2024-07-13 19:55:28.049146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:000000d1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.423 [2024-07-13 19:55:28.049163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.423 #52 NEW cov: 12076 ft: 14410 corp: 27/900b lim: 40 exec/s: 52 rss: 70Mb L: 34/40 MS: 1 ShuffleBytes- 00:07:40.683 [2024-07-13 19:55:28.088750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:74000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.683 [2024-07-13 19:55:28.088781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.683 [2024-07-13 19:55:28.088912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:000000d1 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.683 [2024-07-13 19:55:28.088929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.683 [2024-07-13 19:55:28.089059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.683 [2024-07-13 19:55:28.089076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.683 [2024-07-13 19:55:28.089204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:0000002b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.683 [2024-07-13 19:55:28.089219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.683 #53 NEW cov: 12076 ft: 14433 corp: 28/935b lim: 40 exec/s: 53 rss: 70Mb L: 35/40 MS: 1 InsertByte- 00:07:40.683 [2024-07-13 19:55:28.128900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.683 [2024-07-13 19:55:28.128927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.683 [2024-07-13 19:55:28.129062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.683 [2024-07-13 19:55:28.129080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.683 [2024-07-13 19:55:28.129213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:81818181 cdw11:8181ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.683 [2024-07-13 19:55:28.129230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.683 [2024-07-13 19:55:28.129372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.683 [2024-07-13 19:55:28.129389] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.683 #54 NEW cov: 12076 ft: 14462 corp: 29/970b lim: 40 exec/s: 54 rss: 70Mb L: 35/40 MS: 1 InsertRepeatedBytes- 00:07:40.683 [2024-07-13 19:55:28.178854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:74000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.683 [2024-07-13 19:55:28.178881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.683 [2024-07-13 19:55:28.179026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.683 [2024-07-13 19:55:28.179044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.683 [2024-07-13 19:55:28.179178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.683 [2024-07-13 19:55:28.179196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.683 #55 NEW cov: 12076 ft: 14470 corp: 30/996b lim: 40 exec/s: 55 rss: 70Mb L: 26/40 MS: 1 ShuffleBytes- 00:07:40.683 [2024-07-13 19:55:28.229228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0e000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.683 [2024-07-13 19:55:28.229261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.683 [2024-07-13 19:55:28.229396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.683 [2024-07-13 19:55:28.229414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.683 [2024-07-13 19:55:28.229549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.683 [2024-07-13 19:55:28.229568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.683 [2024-07-13 19:55:28.229693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.683 [2024-07-13 19:55:28.229709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.683 #56 NEW cov: 12076 ft: 14483 corp: 31/1035b lim: 40 exec/s: 56 rss: 70Mb L: 39/40 MS: 1 InsertByte- 00:07:40.683 [2024-07-13 19:55:28.279352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.683 [2024-07-13 19:55:28.279382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.683 [2024-07-13 19:55:28.279509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND 
(81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.683 [2024-07-13 19:55:28.279526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.683 [2024-07-13 19:55:28.279650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:81818181 cdw11:8181ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.683 [2024-07-13 19:55:28.279667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.683 [2024-07-13 19:55:28.279798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.683 [2024-07-13 19:55:28.279814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.683 #57 NEW cov: 12076 ft: 14501 corp: 32/1070b lim: 40 exec/s: 57 rss: 70Mb L: 35/40 MS: 1 CopyPart- 00:07:40.683 [2024-07-13 19:55:28.329258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0e000000 cdw11:00000400 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.683 [2024-07-13 19:55:28.329285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.683 [2024-07-13 19:55:28.329420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.683 [2024-07-13 19:55:28.329436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.683 [2024-07-13 19:55:28.329582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.683 [2024-07-13 19:55:28.329598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.942 #58 NEW cov: 12076 ft: 14513 corp: 33/1100b lim: 40 exec/s: 58 rss: 70Mb L: 30/40 MS: 1 CopyPart- 00:07:40.943 [2024-07-13 19:55:28.379899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:74000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.943 [2024-07-13 19:55:28.379927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.943 [2024-07-13 19:55:28.380064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.943 [2024-07-13 19:55:28.380080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.943 [2024-07-13 19:55:28.380215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.943 [2024-07-13 19:55:28.380231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.943 [2024-07-13 19:55:28.380363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00fb0000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.943 [2024-07-13 19:55:28.380377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.943 [2024-07-13 19:55:28.380511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.943 [2024-07-13 19:55:28.380528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:40.943 #59 NEW cov: 12076 ft: 14517 corp: 34/1140b lim: 40 exec/s: 59 rss: 70Mb L: 40/40 MS: 1 CrossOver- 00:07:40.943 [2024-07-13 19:55:28.429778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0e000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.943 [2024-07-13 19:55:28.429803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.943 [2024-07-13 19:55:28.429934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.943 [2024-07-13 19:55:28.429949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.943 [2024-07-13 19:55:28.430079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.943 [2024-07-13 19:55:28.430097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.943 [2024-07-13 19:55:28.430229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.943 [2024-07-13 19:55:28.430246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.943 #60 NEW cov: 12076 ft: 14524 corp: 35/1179b lim: 40 exec/s: 60 rss: 70Mb L: 39/40 MS: 1 InsertByte- 00:07:40.943 [2024-07-13 19:55:28.470125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:74000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.943 [2024-07-13 19:55:28.470152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.943 [2024-07-13 19:55:28.470279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.943 [2024-07-13 19:55:28.470297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.943 [2024-07-13 19:55:28.470429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.943 [2024-07-13 19:55:28.470449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.943 [2024-07-13 19:55:28.470590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00007400 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.943 [2024-07-13 19:55:28.470607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.943 [2024-07-13 19:55:28.470739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.943 [2024-07-13 19:55:28.470759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:40.943 #61 NEW cov: 12076 ft: 14557 corp: 36/1219b lim: 40 exec/s: 61 rss: 70Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:40.943 [2024-07-13 19:55:28.519823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:74000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.943 [2024-07-13 19:55:28.519851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.943 [2024-07-13 19:55:28.519989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000fb00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.943 [2024-07-13 19:55:28.520006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.943 [2024-07-13 19:55:28.520133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.943 [2024-07-13 19:55:28.520149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.943 #62 NEW cov: 12076 ft: 14565 corp: 37/1248b lim: 40 exec/s: 62 rss: 70Mb L: 29/40 MS: 1 EraseBytes- 00:07:40.943 [2024-07-13 19:55:28.559648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0e000000 cdw11:00000400 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.943 [2024-07-13 19:55:28.559673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.943 [2024-07-13 19:55:28.559804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:01000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.943 [2024-07-13 19:55:28.559820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.943 #63 NEW cov: 12076 ft: 14786 corp: 38/1265b lim: 40 exec/s: 63 rss: 71Mb L: 17/40 MS: 1 EraseBytes- 00:07:41.202 [2024-07-13 19:55:28.610374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:74000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.202 [2024-07-13 19:55:28.610401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.202 [2024-07-13 19:55:28.610536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:000000d1 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.202 [2024-07-13 19:55:28.610553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.202 [2024-07-13 19:55:28.610691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.202 [2024-07-13 19:55:28.610707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.202 [2024-07-13 19:55:28.610826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00002200 cdw11:000000d1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.202 [2024-07-13 19:55:28.610847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.202 #64 pulse cov: 12076 ft: 14795 corp: 38/1265b lim: 40 exec/s: 32 rss: 71Mb 00:07:41.202 #64 NEW cov: 12076 ft: 14795 corp: 39/1299b lim: 40 exec/s: 32 rss: 71Mb L: 34/40 MS: 1 ChangeBinInt- 00:07:41.202 #64 DONE cov: 12076 ft: 14795 corp: 39/1299b lim: 40 exec/s: 32 rss: 71Mb 00:07:41.202 Done 64 runs in 2 second(s) 00:07:41.202 19:55:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:07:41.202 19:55:28 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:41.202 19:55:28 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:41.202 19:55:28 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:07:41.202 19:55:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:07:41.202 19:55:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:41.202 19:55:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:41.202 19:55:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:41.202 19:55:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:07:41.202 19:55:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:41.202 19:55:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:41.202 19:55:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:07:41.202 19:55:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4412 00:07:41.202 19:55:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:41.202 19:55:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:07:41.202 19:55:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:41.202 19:55:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:41.202 19:55:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:41.202 19:55:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:07:41.202 [2024-07-13 19:55:28.789567] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:41.202 [2024-07-13 19:55:28.789653] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3678917 ] 00:07:41.202 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.460 [2024-07-13 19:55:29.045087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.461 [2024-07-13 19:55:29.076282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.720 [2024-07-13 19:55:29.128463] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.720 [2024-07-13 19:55:29.144760] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:07:41.720 INFO: Running with entropic power schedule (0xFF, 100). 00:07:41.720 INFO: Seed: 2098211508 00:07:41.720 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:41.720 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:41.720 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:41.720 INFO: A corpus is not provided, starting from an empty corpus 00:07:41.720 #2 INITED exec/s: 0 rss: 62Mb 00:07:41.720 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:41.720 This may also happen if the target rejected all inputs we tried so far 00:07:41.720 [2024-07-13 19:55:29.194261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.720 [2024-07-13 19:55:29.194289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.720 [2024-07-13 19:55:29.194345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.720 [2024-07-13 19:55:29.194360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.720 [2024-07-13 19:55:29.194417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.720 [2024-07-13 19:55:29.194430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.979 NEW_FUNC[1/692]: 0x4a4300 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:07:41.979 NEW_FUNC[2/692]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:41.979 #6 NEW cov: 11826 ft: 11827 corp: 2/30b lim: 40 exec/s: 0 rss: 69Mb L: 29/29 MS: 4 ChangeBinInt-ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:07:41.979 [2024-07-13 19:55:29.505152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:0affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.979 [2024-07-13 
19:55:29.505195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.979 [2024-07-13 19:55:29.505267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.979 [2024-07-13 19:55:29.505285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.979 [2024-07-13 19:55:29.505352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.979 [2024-07-13 19:55:29.505369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.979 #17 NEW cov: 11960 ft: 12480 corp: 3/60b lim: 40 exec/s: 0 rss: 69Mb L: 30/30 MS: 1 CrossOver- 00:07:41.979 [2024-07-13 19:55:29.555152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.979 [2024-07-13 19:55:29.555182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.979 [2024-07-13 19:55:29.555259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:10ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.979 [2024-07-13 19:55:29.555274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.979 [2024-07-13 19:55:29.555334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.979 [2024-07-13 19:55:29.555347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.979 #18 NEW cov: 11966 ft: 12638 corp: 4/90b lim: 40 exec/s: 0 rss: 69Mb L: 30/30 MS: 1 CrossOver- 00:07:41.979 [2024-07-13 19:55:29.605394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.979 [2024-07-13 19:55:29.605421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.979 [2024-07-13 19:55:29.605505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:acacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.979 [2024-07-13 19:55:29.605520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.979 [2024-07-13 19:55:29.605581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:acacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.979 [2024-07-13 19:55:29.605595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.979 [2024-07-13 19:55:29.605658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:acacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.979 
[2024-07-13 19:55:29.605672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:41.979 #19 NEW cov: 12051 ft: 13250 corp: 5/128b lim: 40 exec/s: 0 rss: 69Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:07:42.238 [2024-07-13 19:55:29.645388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.238 [2024-07-13 19:55:29.645414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.238 [2024-07-13 19:55:29.645491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:10ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.238 [2024-07-13 19:55:29.645506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.238 [2024-07-13 19:55:29.645564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.238 [2024-07-13 19:55:29.645578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.238 #20 NEW cov: 12051 ft: 13299 corp: 6/158b lim: 40 exec/s: 0 rss: 69Mb L: 30/38 MS: 1 CopyPart- 00:07:42.238 [2024-07-13 19:55:29.695489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:bfffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.238 [2024-07-13 19:55:29.695515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.238 [2024-07-13 19:55:29.695591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:10ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.238 [2024-07-13 19:55:29.695604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.238 [2024-07-13 19:55:29.695665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.238 [2024-07-13 19:55:29.695678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.238 #21 NEW cov: 12051 ft: 13341 corp: 7/188b lim: 40 exec/s: 0 rss: 69Mb L: 30/38 MS: 1 ChangeBit- 00:07:42.238 [2024-07-13 19:55:29.745329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.238 [2024-07-13 19:55:29.745354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.238 #22 NEW cov: 12051 ft: 14198 corp: 8/203b lim: 40 exec/s: 0 rss: 70Mb L: 15/38 MS: 1 EraseBytes- 00:07:42.238 [2024-07-13 19:55:29.785815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:0affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.238 [2024-07-13 19:55:29.785843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:07:42.238 [2024-07-13 19:55:29.785920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.238 [2024-07-13 19:55:29.785935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.238 [2024-07-13 19:55:29.785995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:fffdffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.238 [2024-07-13 19:55:29.786008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.238 #23 NEW cov: 12051 ft: 14302 corp: 9/233b lim: 40 exec/s: 0 rss: 70Mb L: 30/38 MS: 1 ChangeBit- 00:07:42.238 [2024-07-13 19:55:29.826082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.239 [2024-07-13 19:55:29.826108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.239 [2024-07-13 19:55:29.826169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:acacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.239 [2024-07-13 19:55:29.826183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.239 [2024-07-13 19:55:29.826241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:acacacac cdw11:acacffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.239 [2024-07-13 19:55:29.826255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.239 [2024-07-13 19:55:29.826315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:10ffffff cdw11:ffffacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.239 [2024-07-13 19:55:29.826329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.239 #24 NEW cov: 12051 ft: 14353 corp: 10/271b lim: 40 exec/s: 0 rss: 70Mb L: 38/38 MS: 1 CrossOver- 00:07:42.239 [2024-07-13 19:55:29.876069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fffaffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.239 [2024-07-13 19:55:29.876094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.239 [2024-07-13 19:55:29.876172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.239 [2024-07-13 19:55:29.876187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.239 [2024-07-13 19:55:29.876247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.239 [2024-07-13 19:55:29.876260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.239 #25 NEW cov: 12051 ft: 14392 corp: 11/300b lim: 40 exec/s: 0 rss: 70Mb L: 29/38 MS: 1 ChangeBinInt- 00:07:42.498 [2024-07-13 19:55:29.916136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:0affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.498 [2024-07-13 19:55:29.916161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.498 [2024-07-13 19:55:29.916221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.498 [2024-07-13 19:55:29.916240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.498 [2024-07-13 19:55:29.916301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.498 [2024-07-13 19:55:29.916315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.498 #26 NEW cov: 12051 ft: 14406 corp: 12/330b lim: 40 exec/s: 0 rss: 70Mb L: 30/38 MS: 1 CrossOver- 00:07:42.498 [2024-07-13 19:55:29.956305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.498 [2024-07-13 19:55:29.956329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.498 [2024-07-13 19:55:29.956391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.498 [2024-07-13 19:55:29.956404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.498 [2024-07-13 19:55:29.956469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.498 [2024-07-13 19:55:29.956483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.498 #27 NEW cov: 12051 ft: 14422 corp: 13/360b lim: 40 exec/s: 0 rss: 70Mb L: 30/38 MS: 1 CrossOver- 00:07:42.498 [2024-07-13 19:55:30.006432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.498 [2024-07-13 19:55:30.006463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.498 [2024-07-13 19:55:30.006525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:10ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.498 [2024-07-13 19:55:30.006540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.498 [2024-07-13 19:55:30.006602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:42.498 [2024-07-13 19:55:30.006616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.498 #28 NEW cov: 12051 ft: 14451 corp: 14/387b lim: 40 exec/s: 0 rss: 70Mb L: 27/38 MS: 1 EraseBytes- 00:07:42.498 [2024-07-13 19:55:30.046234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.498 [2024-07-13 19:55:30.046260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.498 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:42.498 #29 NEW cov: 12074 ft: 14568 corp: 15/402b lim: 40 exec/s: 0 rss: 70Mb L: 15/38 MS: 1 CrossOver- 00:07:42.498 [2024-07-13 19:55:30.096684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:bfffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.498 [2024-07-13 19:55:30.096712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.498 [2024-07-13 19:55:30.096773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:10ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.498 [2024-07-13 19:55:30.096787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.498 [2024-07-13 19:55:30.096849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.498 [2024-07-13 19:55:30.096862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.498 #30 NEW cov: 12074 ft: 14591 corp: 16/426b lim: 40 exec/s: 0 rss: 70Mb L: 24/38 MS: 1 CrossOver- 00:07:42.498 [2024-07-13 19:55:30.146842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:bfffffff cdw11:fffffffe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.498 [2024-07-13 19:55:30.146868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.498 [2024-07-13 19:55:30.146929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:10ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.498 [2024-07-13 19:55:30.146942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.498 [2024-07-13 19:55:30.147000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.498 [2024-07-13 19:55:30.147013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.757 #31 NEW cov: 12074 ft: 14620 corp: 17/450b lim: 40 exec/s: 31 rss: 70Mb L: 24/38 MS: 1 ChangeBit- 00:07:42.757 [2024-07-13 19:55:30.196615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:42.757 [2024-07-13 19:55:30.196639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.757 #32 NEW cov: 12074 ft: 14687 corp: 18/465b lim: 40 exec/s: 32 rss: 70Mb L: 15/38 MS: 1 ShuffleBytes- 00:07:42.757 [2024-07-13 19:55:30.246790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:bfffffff cdw11:fffffffe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.757 [2024-07-13 19:55:30.246815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.757 #33 NEW cov: 12074 ft: 14692 corp: 19/480b lim: 40 exec/s: 33 rss: 70Mb L: 15/38 MS: 1 EraseBytes- 00:07:42.757 [2024-07-13 19:55:30.297488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.757 [2024-07-13 19:55:30.297512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.757 [2024-07-13 19:55:30.297571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:10ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.757 [2024-07-13 19:55:30.297584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.757 [2024-07-13 19:55:30.297657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.757 [2024-07-13 19:55:30.297671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.757 [2024-07-13 19:55:30.297728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.757 [2024-07-13 19:55:30.297742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.757 #34 NEW cov: 12074 ft: 14705 corp: 20/519b lim: 40 exec/s: 34 rss: 70Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:07:42.757 [2024-07-13 19:55:30.337355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:bfffffff cdw11:fffffffe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.757 [2024-07-13 19:55:30.337381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.757 [2024-07-13 19:55:30.337440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:10ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.757 [2024-07-13 19:55:30.337457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.757 [2024-07-13 19:55:30.337514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffff37ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.757 [2024-07-13 19:55:30.337528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.757 #35 NEW cov: 12074 ft: 14711 corp: 21/543b lim: 40 
exec/s: 35 rss: 70Mb L: 24/39 MS: 1 ChangeByte- 00:07:42.757 [2024-07-13 19:55:30.377511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:bfffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.757 [2024-07-13 19:55:30.377535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.757 [2024-07-13 19:55:30.377593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:10ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.757 [2024-07-13 19:55:30.377607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.757 [2024-07-13 19:55:30.377662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:10ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.757 [2024-07-13 19:55:30.377675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.757 #36 NEW cov: 12074 ft: 14736 corp: 22/571b lim: 40 exec/s: 36 rss: 70Mb L: 28/39 MS: 1 CrossOver- 00:07:42.757 [2024-07-13 19:55:30.417294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.757 [2024-07-13 19:55:30.417320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.016 #37 NEW cov: 12074 ft: 14793 corp: 23/586b lim: 40 exec/s: 37 rss: 70Mb L: 15/39 MS: 1 CrossOver- 00:07:43.016 [2024-07-13 19:55:30.467431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.016 [2024-07-13 19:55:30.467461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.016 #38 NEW cov: 12074 ft: 14799 corp: 24/601b lim: 40 exec/s: 38 rss: 70Mb L: 15/39 MS: 1 ShuffleBytes- 00:07:43.016 [2024-07-13 19:55:30.517555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.016 [2024-07-13 19:55:30.517580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.016 #39 NEW cov: 12074 ft: 14849 corp: 25/611b lim: 40 exec/s: 39 rss: 70Mb L: 10/39 MS: 1 EraseBytes- 00:07:43.016 [2024-07-13 19:55:30.558163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.016 [2024-07-13 19:55:30.558189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.016 [2024-07-13 19:55:30.558263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:10ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.016 [2024-07-13 19:55:30.558280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.016 [2024-07-13 19:55:30.558336] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.016 [2024-07-13 19:55:30.558350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.016 [2024-07-13 19:55:30.558410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.016 [2024-07-13 19:55:30.558423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.016 #40 NEW cov: 12074 ft: 14866 corp: 26/648b lim: 40 exec/s: 40 rss: 70Mb L: 37/39 MS: 1 CrossOver- 00:07:43.016 [2024-07-13 19:55:30.598313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.016 [2024-07-13 19:55:30.598338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.016 [2024-07-13 19:55:30.598401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffff0000 cdw11:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.016 [2024-07-13 19:55:30.598415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.016 [2024-07-13 19:55:30.598475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ff10ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.016 [2024-07-13 19:55:30.598489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.016 [2024-07-13 19:55:30.598546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.016 [2024-07-13 19:55:30.598559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.016 #41 NEW cov: 12074 ft: 14958 corp: 27/683b lim: 40 exec/s: 41 rss: 70Mb L: 35/39 MS: 1 InsertRepeatedBytes- 00:07:43.016 [2024-07-13 19:55:30.637867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.016 [2024-07-13 19:55:30.637892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.016 #42 NEW cov: 12074 ft: 14973 corp: 28/698b lim: 40 exec/s: 42 rss: 70Mb L: 15/39 MS: 1 CopyPart- 00:07:43.275 [2024-07-13 19:55:30.678392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:7a7a7a7a cdw11:7a7a7a7a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.275 [2024-07-13 19:55:30.678417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.275 [2024-07-13 19:55:30.678480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:7a7a7a7a cdw11:7a7a7a7a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.275 [2024-07-13 19:55:30.678495] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.275 [2024-07-13 19:55:30.678554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:7a7a7a7a cdw11:7a7a7a7a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.275 [2024-07-13 19:55:30.678568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.275 #44 NEW cov: 12074 ft: 14991 corp: 29/729b lim: 40 exec/s: 44 rss: 70Mb L: 31/39 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:43.275 [2024-07-13 19:55:30.718486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.275 [2024-07-13 19:55:30.718515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.275 [2024-07-13 19:55:30.718588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.275 [2024-07-13 19:55:30.718602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.275 [2024-07-13 19:55:30.718673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.275 [2024-07-13 19:55:30.718686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.275 #45 NEW cov: 12074 ft: 15026 corp: 30/759b lim: 40 exec/s: 45 rss: 70Mb L: 30/39 MS: 1 ChangeBinInt- 00:07:43.275 [2024-07-13 19:55:30.768630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:0affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.275 [2024-07-13 19:55:30.768654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.275 [2024-07-13 19:55:30.768713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.275 [2024-07-13 19:55:30.768726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.275 [2024-07-13 19:55:30.768782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:fffdffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.275 [2024-07-13 19:55:30.768796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.275 #46 NEW cov: 12074 ft: 15033 corp: 31/789b lim: 40 exec/s: 46 rss: 71Mb L: 30/39 MS: 1 CrossOver- 00:07:43.275 [2024-07-13 19:55:30.818872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffbfffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.275 [2024-07-13 19:55:30.818899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.275 [2024-07-13 19:55:30.818974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE 
SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:10ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.275 [2024-07-13 19:55:30.818988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.275 [2024-07-13 19:55:30.819046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.275 [2024-07-13 19:55:30.819060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.275 [2024-07-13 19:55:30.819120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.275 [2024-07-13 19:55:30.819133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.275 #47 NEW cov: 12074 ft: 15040 corp: 32/826b lim: 40 exec/s: 47 rss: 71Mb L: 37/39 MS: 1 ChangeBit- 00:07:43.275 [2024-07-13 19:55:30.869037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.275 [2024-07-13 19:55:30.869062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.275 [2024-07-13 19:55:30.869139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.275 [2024-07-13 19:55:30.869155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.275 [2024-07-13 19:55:30.869215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.275 [2024-07-13 19:55:30.869229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.275 [2024-07-13 19:55:30.869289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.275 [2024-07-13 19:55:30.869302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.275 #48 NEW cov: 12074 ft: 15050 corp: 33/863b lim: 40 exec/s: 48 rss: 71Mb L: 37/39 MS: 1 CrossOver- 00:07:43.275 [2024-07-13 19:55:30.908940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.275 [2024-07-13 19:55:30.908965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.275 [2024-07-13 19:55:30.909038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffff1e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.275 [2024-07-13 19:55:30.909052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.275 [2024-07-13 19:55:30.909111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.275 [2024-07-13 19:55:30.909124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.275 #49 NEW cov: 12074 ft: 15063 corp: 34/893b lim: 40 exec/s: 49 rss: 71Mb L: 30/39 MS: 1 ChangeBinInt- 00:07:43.535 [2024-07-13 19:55:30.949072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:bfffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.535 [2024-07-13 19:55:30.949098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.535 [2024-07-13 19:55:30.949158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.535 [2024-07-13 19:55:30.949172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.535 [2024-07-13 19:55:30.949234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ff10ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.535 [2024-07-13 19:55:30.949247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.535 #50 NEW cov: 12074 ft: 15069 corp: 35/922b lim: 40 exec/s: 50 rss: 71Mb L: 29/39 MS: 1 CopyPart- 00:07:43.535 [2024-07-13 19:55:30.989041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.535 [2024-07-13 19:55:30.989066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.535 [2024-07-13 19:55:30.989128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.535 [2024-07-13 19:55:30.989142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.535 #51 NEW cov: 12074 ft: 15277 corp: 36/942b lim: 40 exec/s: 51 rss: 71Mb L: 20/39 MS: 1 EraseBytes- 00:07:43.535 [2024-07-13 19:55:31.029292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.535 [2024-07-13 19:55:31.029318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.535 [2024-07-13 19:55:31.029377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.535 [2024-07-13 19:55:31.029391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.535 [2024-07-13 19:55:31.029451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff5b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.535 [2024-07-13 19:55:31.029465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.535 #52 NEW cov: 12074 ft: 15288 corp: 37/972b lim: 40 exec/s: 52 rss: 71Mb L: 30/39 MS: 1 ChangeByte- 00:07:43.535 [2024-07-13 19:55:31.069408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:0affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.535 [2024-07-13 19:55:31.069433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.535 [2024-07-13 19:55:31.069509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.535 [2024-07-13 19:55:31.069525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.535 [2024-07-13 19:55:31.069583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:dfffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.535 [2024-07-13 19:55:31.069596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.535 #53 NEW cov: 12074 ft: 15397 corp: 38/1002b lim: 40 exec/s: 53 rss: 71Mb L: 30/39 MS: 1 ChangeBit- 00:07:43.535 [2024-07-13 19:55:31.109712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aacacac cdw11:acacacac SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.535 [2024-07-13 19:55:31.109737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.535 [2024-07-13 19:55:31.109812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:acacacac cdw11:acacacff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.535 [2024-07-13 19:55:31.109827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.535 [2024-07-13 19:55:31.109884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffff0a cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.535 [2024-07-13 19:55:31.109897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.535 [2024-07-13 19:55:31.109957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffacff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.535 [2024-07-13 19:55:31.109970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.535 #54 NEW cov: 12074 ft: 15470 corp: 39/1038b lim: 40 exec/s: 54 rss: 71Mb L: 36/39 MS: 1 CrossOver- 00:07:43.535 [2024-07-13 19:55:31.149801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.535 [2024-07-13 19:55:31.149827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.535 [2024-07-13 19:55:31.149891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:10ffffff SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:43.535 [2024-07-13 19:55:31.149905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.535 [2024-07-13 19:55:31.149965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.535 [2024-07-13 19:55:31.149978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.535 [2024-07-13 19:55:31.150039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:fffdffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.535 [2024-07-13 19:55:31.150052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.535 #55 NEW cov: 12074 ft: 15496 corp: 40/1077b lim: 40 exec/s: 27 rss: 71Mb L: 39/39 MS: 1 ChangeBit- 00:07:43.535 #55 DONE cov: 12074 ft: 15496 corp: 40/1077b lim: 40 exec/s: 27 rss: 71Mb 00:07:43.535 Done 55 runs in 2 second(s) 00:07:43.795 19:55:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:07:43.795 19:55:31 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:43.795 19:55:31 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:43.795 19:55:31 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:07:43.795 19:55:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:07:43.795 19:55:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:43.795 19:55:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:43.795 19:55:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:43.795 19:55:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:07:43.795 19:55:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:43.795 19:55:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:43.795 19:55:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:07:43.795 19:55:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4413 00:07:43.795 19:55:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:43.795 19:55:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:07:43.795 19:55:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:43.795 19:55:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:43.795 19:55:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:43.795 19:55:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:07:43.795 [2024-07-13 19:55:31.343009] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:43.795 [2024-07-13 19:55:31.343083] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3679398 ] 00:07:43.795 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.054 [2024-07-13 19:55:31.604590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.054 [2024-07-13 19:55:31.634723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.054 [2024-07-13 19:55:31.687079] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.054 [2024-07-13 19:55:31.703392] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:07:44.312 INFO: Running with entropic power schedule (0xFF, 100). 00:07:44.312 INFO: Seed: 363244151 00:07:44.312 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:44.312 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:44.312 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:44.312 INFO: A corpus is not provided, starting from an empty corpus 00:07:44.312 #2 INITED exec/s: 0 rss: 63Mb 00:07:44.312 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:44.312 This may also happen if the target rejected all inputs we tried so far 00:07:44.312 [2024-07-13 19:55:31.749034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.312 [2024-07-13 19:55:31.749064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.312 [2024-07-13 19:55:31.749120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.312 [2024-07-13 19:55:31.749135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.312 [2024-07-13 19:55:31.749188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.312 [2024-07-13 19:55:31.749202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.313 [2024-07-13 19:55:31.749257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.313 [2024-07-13 19:55:31.749270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.313 [2024-07-13 19:55:31.749326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.313 [2024-07-13 19:55:31.749339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.572 NEW_FUNC[1/691]: 0x4a5ec0 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:07:44.572 NEW_FUNC[2/691]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:44.572 #6 NEW cov: 11818 ft: 11819 corp: 2/41b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 4 ChangeByte-ChangeBit-ChangeBinInt-InsertRepeatedBytes- 00:07:44.572 [2024-07-13 19:55:32.069911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.572 [2024-07-13 19:55:32.069945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.572 [2024-07-13 19:55:32.070005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.572 [2024-07-13 19:55:32.070020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.572 [2024-07-13 19:55:32.070076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.572 [2024-07-13 19:55:32.070089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.572 [2024-07-13 19:55:32.070150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.572 [2024-07-13 19:55:32.070163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.572 [2024-07-13 19:55:32.070219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:6e6e7e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.572 [2024-07-13 19:55:32.070233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.572 #7 NEW cov: 11948 ft: 12409 corp: 3/81b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 ChangeBit- 00:07:44.572 [2024-07-13 19:55:32.119876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.572 [2024-07-13 19:55:32.119903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.572 [2024-07-13 19:55:32.119976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.572 [2024-07-13 19:55:32.119990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.572 [2024-07-13 19:55:32.120044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.572 [2024-07-13 19:55:32.120058] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.572 [2024-07-13 19:55:32.120111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.572 [2024-07-13 19:55:32.120125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.572 [2024-07-13 19:55:32.120180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:6e6e7e6e cdw11:6e766e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.572 [2024-07-13 19:55:32.120193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.572 #8 NEW cov: 11954 ft: 12682 corp: 4/121b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 ChangeByte- 00:07:44.572 [2024-07-13 19:55:32.170004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.572 [2024-07-13 19:55:32.170030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.572 [2024-07-13 19:55:32.170099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.572 [2024-07-13 19:55:32.170113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.572 [2024-07-13 19:55:32.170171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.572 [2024-07-13 19:55:32.170185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.572 [2024-07-13 19:55:32.170239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.572 [2024-07-13 19:55:32.170252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.572 [2024-07-13 19:55:32.170311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:2e6e7e6e cdw11:6e766e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.572 [2024-07-13 19:55:32.170325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.572 #14 NEW cov: 12039 ft: 12894 corp: 5/161b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 ChangeBit- 00:07:44.572 [2024-07-13 19:55:32.220230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.572 [2024-07-13 19:55:32.220254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.572 [2024-07-13 19:55:32.220323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:44.572 [2024-07-13 19:55:32.220336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.572 [2024-07-13 19:55:32.220388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.572 [2024-07-13 19:55:32.220402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.572 [2024-07-13 19:55:32.220459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.572 [2024-07-13 19:55:32.220472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.572 [2024-07-13 19:55:32.220534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:2e6e7e6e cdw11:6e766e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.572 [2024-07-13 19:55:32.220547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.831 #15 NEW cov: 12039 ft: 12992 corp: 6/201b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:44.831 [2024-07-13 19:55:32.270347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.831 [2024-07-13 19:55:32.270371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.831 [2024-07-13 19:55:32.270447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.831 [2024-07-13 19:55:32.270460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.831 [2024-07-13 19:55:32.270527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.831 [2024-07-13 19:55:32.270540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.831 [2024-07-13 19:55:32.270596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.831 [2024-07-13 19:55:32.270609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.831 [2024-07-13 19:55:32.270664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:6e6e7e6e cdw11:6e766e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.831 [2024-07-13 19:55:32.270676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.831 #16 NEW cov: 12039 ft: 13170 corp: 7/241b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:44.831 [2024-07-13 19:55:32.310436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 
cdw10:db6e6e6e cdw11:6e6e6eeb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.831 [2024-07-13 19:55:32.310464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.831 [2024-07-13 19:55:32.310548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.831 [2024-07-13 19:55:32.310561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.831 [2024-07-13 19:55:32.310614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.831 [2024-07-13 19:55:32.310627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.831 [2024-07-13 19:55:32.310680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.831 [2024-07-13 19:55:32.310693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.831 [2024-07-13 19:55:32.310747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:6e6e7e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.831 [2024-07-13 19:55:32.310759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.831 #17 NEW cov: 12039 ft: 13219 corp: 8/281b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ChangeByte- 00:07:44.831 [2024-07-13 19:55:32.350545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.831 [2024-07-13 19:55:32.350570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.832 [2024-07-13 19:55:32.350626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.832 [2024-07-13 19:55:32.350639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.832 [2024-07-13 19:55:32.350709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.832 [2024-07-13 19:55:32.350722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.832 [2024-07-13 19:55:32.350775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:6e6e6e6e cdw11:6a6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.832 [2024-07-13 19:55:32.350788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.832 [2024-07-13 19:55:32.350840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:44.832 [2024-07-13 19:55:32.350853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.832 #18 NEW cov: 12039 ft: 13271 corp: 9/321b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ChangeByte- 00:07:44.832 [2024-07-13 19:55:32.390284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.832 [2024-07-13 19:55:32.390308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.832 [2024-07-13 19:55:32.390366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.832 [2024-07-13 19:55:32.390380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.832 #19 NEW cov: 12039 ft: 13897 corp: 10/343b lim: 40 exec/s: 0 rss: 70Mb L: 22/40 MS: 1 EraseBytes- 00:07:44.832 [2024-07-13 19:55:32.440332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ff28b466 cdw11:db7f7888 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.832 [2024-07-13 19:55:32.440358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.832 #25 NEW cov: 12039 ft: 14253 corp: 11/352b lim: 40 exec/s: 0 rss: 70Mb L: 9/40 MS: 1 CMP- DE: "\377(\264f\333\177x\210"- 00:07:44.832 [2024-07-13 19:55:32.480923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.832 [2024-07-13 19:55:32.480948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.832 [2024-07-13 19:55:32.481006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.832 [2024-07-13 19:55:32.481019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.832 [2024-07-13 19:55:32.481071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.832 [2024-07-13 19:55:32.481085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.832 [2024-07-13 19:55:32.481138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.832 [2024-07-13 19:55:32.481152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.832 [2024-07-13 19:55:32.481206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e86 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.832 [2024-07-13 19:55:32.481219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:07:45.091 #26 NEW cov: 12039 ft: 14337 corp: 12/392b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ChangeByte- 00:07:45.091 [2024-07-13 19:55:32.520655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.091 [2024-07-13 19:55:32.520681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.091 [2024-07-13 19:55:32.520736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6a6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.091 [2024-07-13 19:55:32.520750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.091 #28 NEW cov: 12039 ft: 14373 corp: 13/412b lim: 40 exec/s: 0 rss: 70Mb L: 20/40 MS: 2 ChangeByte-CrossOver- 00:07:45.091 [2024-07-13 19:55:32.561154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:92919194 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.091 [2024-07-13 19:55:32.561179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.091 [2024-07-13 19:55:32.561249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.091 [2024-07-13 19:55:32.561268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.091 [2024-07-13 19:55:32.561322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.091 [2024-07-13 19:55:32.561335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.091 [2024-07-13 19:55:32.561389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.091 [2024-07-13 19:55:32.561402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.091 [2024-07-13 19:55:32.561458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:6e6e7e6e cdw11:6e766e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.091 [2024-07-13 19:55:32.561471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:45.091 #29 NEW cov: 12039 ft: 14393 corp: 14/452b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ChangeBinInt- 00:07:45.091 [2024-07-13 19:55:32.601229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e706e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.091 [2024-07-13 19:55:32.601254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.091 [2024-07-13 19:55:32.601311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:45.091 [2024-07-13 19:55:32.601324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.091 [2024-07-13 19:55:32.601394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.091 [2024-07-13 19:55:32.601408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.091 [2024-07-13 19:55:32.601463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.091 [2024-07-13 19:55:32.601476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.091 [2024-07-13 19:55:32.601530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:2e6e7e6e cdw11:6e766e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.091 [2024-07-13 19:55:32.601543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:45.091 #30 NEW cov: 12039 ft: 14429 corp: 15/492b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ChangeBinInt- 00:07:45.091 [2024-07-13 19:55:32.641107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.091 [2024-07-13 19:55:32.641133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.091 [2024-07-13 19:55:32.641190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.091 [2024-07-13 19:55:32.641204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.091 [2024-07-13 19:55:32.641260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e2e6e cdw11:7e6e6e76 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.091 [2024-07-13 19:55:32.641276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.091 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:45.091 #31 NEW cov: 12062 ft: 14638 corp: 16/518b lim: 40 exec/s: 0 rss: 70Mb L: 26/40 MS: 1 EraseBytes- 00:07:45.091 [2024-07-13 19:55:32.681126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.091 [2024-07-13 19:55:32.681151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.091 [2024-07-13 19:55:32.681208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6a6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.091 [2024-07-13 19:55:32.681221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:07:45.091 #32 NEW cov: 12062 ft: 14655 corp: 17/538b lim: 40 exec/s: 0 rss: 70Mb L: 20/40 MS: 1 ShuffleBytes- 00:07:45.091 [2024-07-13 19:55:32.731230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.091 [2024-07-13 19:55:32.731255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.091 [2024-07-13 19:55:32.731311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e766e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.091 [2024-07-13 19:55:32.731324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.365 #33 NEW cov: 12062 ft: 14693 corp: 18/554b lim: 40 exec/s: 33 rss: 70Mb L: 16/40 MS: 1 EraseBytes- 00:07:45.365 [2024-07-13 19:55:32.781754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.365 [2024-07-13 19:55:32.781781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.365 [2024-07-13 19:55:32.781837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.365 [2024-07-13 19:55:32.781851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.365 [2024-07-13 19:55:32.781903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.365 [2024-07-13 19:55:32.781916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.365 [2024-07-13 19:55:32.781969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:6e9f6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.365 [2024-07-13 19:55:32.781982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.365 [2024-07-13 19:55:32.782033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:2e6e7e6e cdw11:6e766e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.365 [2024-07-13 19:55:32.782045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:45.365 #34 NEW cov: 12062 ft: 14712 corp: 19/594b lim: 40 exec/s: 34 rss: 70Mb L: 40/40 MS: 1 ChangeByte- 00:07:45.365 [2024-07-13 19:55:32.831533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.365 [2024-07-13 19:55:32.831561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.365 [2024-07-13 19:55:32.831618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:45.365 [2024-07-13 19:55:32.831632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.365 #35 NEW cov: 12062 ft: 14726 corp: 20/615b lim: 40 exec/s: 35 rss: 70Mb L: 21/40 MS: 1 EraseBytes- 00:07:45.365 [2024-07-13 19:55:32.872000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.366 [2024-07-13 19:55:32.872026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.366 [2024-07-13 19:55:32.872081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.366 [2024-07-13 19:55:32.872095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.366 [2024-07-13 19:55:32.872166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.366 [2024-07-13 19:55:32.872180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.366 [2024-07-13 19:55:32.872235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.366 [2024-07-13 19:55:32.872248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.366 [2024-07-13 19:55:32.872304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:6e6e7e6e cdw11:6e766e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.366 [2024-07-13 19:55:32.872317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:45.366 #36 NEW cov: 12062 ft: 14746 corp: 21/655b lim: 40 exec/s: 36 rss: 70Mb L: 40/40 MS: 1 CopyPart- 00:07:45.366 [2024-07-13 19:55:32.911746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.366 [2024-07-13 19:55:32.911771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.366 [2024-07-13 19:55:32.911826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.366 [2024-07-13 19:55:32.911839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.366 #37 NEW cov: 12062 ft: 14755 corp: 22/674b lim: 40 exec/s: 37 rss: 70Mb L: 19/40 MS: 1 EraseBytes- 00:07:45.366 [2024-07-13 19:55:32.962266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.366 [2024-07-13 19:55:32.962291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.366 [2024-07-13 
19:55:32.962348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.366 [2024-07-13 19:55:32.962361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.366 [2024-07-13 19:55:32.962416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.366 [2024-07-13 19:55:32.962431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.366 [2024-07-13 19:55:32.962485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.366 [2024-07-13 19:55:32.962498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.366 [2024-07-13 19:55:32.962550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:6e6e7e6e cdw11:6e766e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.366 [2024-07-13 19:55:32.962563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:45.366 #38 NEW cov: 12062 ft: 14756 corp: 23/714b lim: 40 exec/s: 38 rss: 70Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:45.366 [2024-07-13 19:55:33.002403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.366 [2024-07-13 19:55:33.002427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.366 [2024-07-13 19:55:33.002500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.366 [2024-07-13 19:55:33.002515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.366 [2024-07-13 19:55:33.002568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e6e6e cdw11:6e6e766e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.366 [2024-07-13 19:55:33.002581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.366 [2024-07-13 19:55:33.002645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.366 [2024-07-13 19:55:33.002657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.366 [2024-07-13 19:55:33.002711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:2e6e7e6e cdw11:6e766e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.366 [2024-07-13 19:55:33.002724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:45.366 #39 NEW cov: 12062 ft: 14767 corp: 24/754b lim: 40 exec/s: 
39 rss: 70Mb L: 40/40 MS: 1 ChangeBinInt- 00:07:45.627 [2024-07-13 19:55:33.042261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.627 [2024-07-13 19:55:33.042285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.627 [2024-07-13 19:55:33.042357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.627 [2024-07-13 19:55:33.042370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.627 [2024-07-13 19:55:33.042423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e2e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.627 [2024-07-13 19:55:33.042435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.627 #40 NEW cov: 12062 ft: 14832 corp: 25/785b lim: 40 exec/s: 40 rss: 70Mb L: 31/40 MS: 1 EraseBytes- 00:07:45.627 [2024-07-13 19:55:33.092414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.627 [2024-07-13 19:55:33.092447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.627 [2024-07-13 19:55:33.092519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.627 [2024-07-13 19:55:33.092533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.627 [2024-07-13 19:55:33.092586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e2e6e cdw11:7e6e6e76 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.627 [2024-07-13 19:55:33.092600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.627 #41 NEW cov: 12062 ft: 14909 corp: 26/811b lim: 40 exec/s: 41 rss: 70Mb L: 26/40 MS: 1 ChangeBinInt- 00:07:45.627 [2024-07-13 19:55:33.132399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.627 [2024-07-13 19:55:33.132423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.627 [2024-07-13 19:55:33.132493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.627 [2024-07-13 19:55:33.132507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.627 #42 NEW cov: 12062 ft: 14947 corp: 27/830b lim: 40 exec/s: 42 rss: 70Mb L: 19/40 MS: 1 ChangeByte- 00:07:45.627 [2024-07-13 19:55:33.182544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:6e6e6e6e 
cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.627 [2024-07-13 19:55:33.182569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.627 [2024-07-13 19:55:33.182639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6a6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.627 [2024-07-13 19:55:33.182652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.627 #43 NEW cov: 12062 ft: 14956 corp: 28/850b lim: 40 exec/s: 43 rss: 70Mb L: 20/40 MS: 1 ShuffleBytes- 00:07:45.627 [2024-07-13 19:55:33.232697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.627 [2024-07-13 19:55:33.232721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.627 [2024-07-13 19:55:33.232794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e766e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.627 [2024-07-13 19:55:33.232807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.627 #44 NEW cov: 12062 ft: 14965 corp: 29/866b lim: 40 exec/s: 44 rss: 70Mb L: 16/40 MS: 1 CrossOver- 00:07:45.627 [2024-07-13 19:55:33.283218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.627 [2024-07-13 19:55:33.283242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.627 [2024-07-13 19:55:33.283301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.627 [2024-07-13 19:55:33.283318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.627 [2024-07-13 19:55:33.283374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.627 [2024-07-13 19:55:33.283387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.627 [2024-07-13 19:55:33.283446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:6e9f6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.627 [2024-07-13 19:55:33.283459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.627 [2024-07-13 19:55:33.283514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:2e6e7e6e cdw11:6e766e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.628 [2024-07-13 19:55:33.283527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:45.887 #45 NEW cov: 12062 ft: 14976 corp: 30/906b lim: 40 exec/s: 45 rss: 70Mb L: 40/40 
MS: 1 ShuffleBytes- 00:07:45.887 [2024-07-13 19:55:33.322928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e78 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.887 [2024-07-13 19:55:33.322953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.887 [2024-07-13 19:55:33.323024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.887 [2024-07-13 19:55:33.323038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.887 #46 NEW cov: 12062 ft: 15013 corp: 31/925b lim: 40 exec/s: 46 rss: 70Mb L: 19/40 MS: 1 ChangeBinInt- 00:07:45.887 [2024-07-13 19:55:33.363462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:92919194 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.887 [2024-07-13 19:55:33.363486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.887 [2024-07-13 19:55:33.363570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.887 [2024-07-13 19:55:33.363584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.887 [2024-07-13 19:55:33.363640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.887 [2024-07-13 19:55:33.363653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.887 [2024-07-13 19:55:33.363708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.887 [2024-07-13 19:55:33.363721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.887 [2024-07-13 19:55:33.363777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:6e6e7e6e cdw11:6e766e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.887 [2024-07-13 19:55:33.363790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:45.887 #47 NEW cov: 12062 ft: 15025 corp: 32/965b lim: 40 exec/s: 47 rss: 70Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:45.887 [2024-07-13 19:55:33.413195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.887 [2024-07-13 19:55:33.413222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.887 [2024-07-13 19:55:33.413294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ff010000 cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.887 [2024-07-13 19:55:33.413308] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.887 #48 NEW cov: 12062 ft: 15039 corp: 33/985b lim: 40 exec/s: 48 rss: 70Mb L: 20/40 MS: 1 CMP- DE: "\377\001\000\000"- 00:07:45.887 [2024-07-13 19:55:33.453446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.887 [2024-07-13 19:55:33.453472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.887 [2024-07-13 19:55:33.453544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.887 [2024-07-13 19:55:33.453558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.887 [2024-07-13 19:55:33.453614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e2e6e cdw11:7e2e6e76 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.887 [2024-07-13 19:55:33.453626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.887 #49 NEW cov: 12062 ft: 15055 corp: 34/1011b lim: 40 exec/s: 49 rss: 70Mb L: 26/40 MS: 1 ChangeBit- 00:07:45.887 [2024-07-13 19:55:33.493833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.887 [2024-07-13 19:55:33.493858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.887 [2024-07-13 19:55:33.493915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.887 [2024-07-13 19:55:33.493928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.887 [2024-07-13 19:55:33.493982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.887 [2024-07-13 19:55:33.493995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.887 [2024-07-13 19:55:33.494049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.887 [2024-07-13 19:55:33.494061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.887 [2024-07-13 19:55:33.494118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.887 [2024-07-13 19:55:33.494131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:45.887 #50 NEW cov: 12062 ft: 15056 corp: 35/1051b lim: 40 exec/s: 50 rss: 70Mb L: 40/40 MS: 1 CopyPart- 00:07:45.887 [2024-07-13 19:55:33.533928] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:4e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.887 [2024-07-13 19:55:33.533953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.887 [2024-07-13 19:55:33.534009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.887 [2024-07-13 19:55:33.534025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.887 [2024-07-13 19:55:33.534079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.887 [2024-07-13 19:55:33.534092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.887 [2024-07-13 19:55:33.534147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.887 [2024-07-13 19:55:33.534160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.887 [2024-07-13 19:55:33.534212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:6e6e7e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.887 [2024-07-13 19:55:33.534225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:46.147 #51 NEW cov: 12062 ft: 15058 corp: 36/1091b lim: 40 exec/s: 51 rss: 70Mb L: 40/40 MS: 1 ChangeBit- 00:07:46.147 [2024-07-13 19:55:33.574017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.147 [2024-07-13 19:55:33.574041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.147 [2024-07-13 19:55:33.574115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.147 [2024-07-13 19:55:33.574128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.147 [2024-07-13 19:55:33.574186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.147 [2024-07-13 19:55:33.574199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.147 [2024-07-13 19:55:33.574255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:6e9f6e6e cdw11:6e6ee26e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.147 [2024-07-13 19:55:33.574268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.147 [2024-07-13 19:55:33.574324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:2e6e7e6e cdw11:6e766e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.147 [2024-07-13 19:55:33.574337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:46.147 #52 NEW cov: 12062 ft: 15066 corp: 37/1131b lim: 40 exec/s: 52 rss: 70Mb L: 40/40 MS: 1 ChangeByte- 00:07:46.147 [2024-07-13 19:55:33.613902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.147 [2024-07-13 19:55:33.613928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.147 [2024-07-13 19:55:33.613986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.147 [2024-07-13 19:55:33.614000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.147 [2024-07-13 19:55:33.614056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.147 [2024-07-13 19:55:33.614072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.147 #53 NEW cov: 12062 ft: 15077 corp: 38/1159b lim: 40 exec/s: 53 rss: 70Mb L: 28/40 MS: 1 EraseBytes- 00:07:46.147 [2024-07-13 19:55:33.664313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.147 [2024-07-13 19:55:33.664338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.147 [2024-07-13 19:55:33.664413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.147 [2024-07-13 19:55:33.664427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.147 [2024-07-13 19:55:33.664484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.147 [2024-07-13 19:55:33.664498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.147 [2024-07-13 19:55:33.664553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6eee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.148 [2024-07-13 19:55:33.664566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.148 [2024-07-13 19:55:33.664621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:2e6e7e6e cdw11:6e766e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.148 [2024-07-13 19:55:33.664635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:46.148 #54 NEW cov: 12062 ft: 
15083 corp: 39/1199b lim: 40 exec/s: 54 rss: 70Mb L: 40/40 MS: 1 ChangeBit- 00:07:46.148 [2024-07-13 19:55:33.704381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:ff010000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.148 [2024-07-13 19:55:33.704406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.148 [2024-07-13 19:55:33.704460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.148 [2024-07-13 19:55:33.704474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.148 [2024-07-13 19:55:33.704530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.148 [2024-07-13 19:55:33.704543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.148 [2024-07-13 19:55:33.704601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.148 [2024-07-13 19:55:33.704614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.148 [2024-07-13 19:55:33.704669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:6e6e7e6e cdw11:6e766e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.148 [2024-07-13 19:55:33.704681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:46.148 #55 NEW cov: 12062 ft: 15119 corp: 40/1239b lim: 40 exec/s: 55 rss: 70Mb L: 40/40 MS: 1 PersAutoDict- DE: "\377\001\000\000"- 00:07:46.148 [2024-07-13 19:55:33.744274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:db6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.148 [2024-07-13 19:55:33.744299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.148 [2024-07-13 19:55:33.744356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.148 [2024-07-13 19:55:33.744369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.148 [2024-07-13 19:55:33.744425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6e6e6e6e cdw11:6e6e6e6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.148 [2024-07-13 19:55:33.744438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.148 #56 NEW cov: 12062 ft: 15121 corp: 41/1266b lim: 40 exec/s: 28 rss: 70Mb L: 27/40 MS: 1 CopyPart- 00:07:46.148 #56 DONE cov: 12062 ft: 15121 corp: 41/1266b lim: 40 exec/s: 28 rss: 70Mb 00:07:46.148 ###### Recommended dictionary. 
###### 00:07:46.148 "\377(\264f\333\177x\210" # Uses: 0 00:07:46.148 "\377\001\000\000" # Uses: 1 00:07:46.148 ###### End of recommended dictionary. ###### 00:07:46.148 Done 56 runs in 2 second(s) 00:07:46.408 19:55:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:07:46.408 19:55:33 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:46.408 19:55:33 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:46.408 19:55:33 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:07:46.408 19:55:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:07:46.408 19:55:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:46.408 19:55:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:46.408 19:55:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:46.408 19:55:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:07:46.408 19:55:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:46.408 19:55:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:46.408 19:55:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:07:46.408 19:55:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4414 00:07:46.408 19:55:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:46.408 19:55:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:07:46.408 19:55:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:46.408 19:55:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:46.408 19:55:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:46.408 19:55:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:07:46.408 [2024-07-13 19:55:33.938223] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:46.408 [2024-07-13 19:55:33.938312] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3679738 ] 00:07:46.408 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.667 [2024-07-13 19:55:34.198547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.667 [2024-07-13 19:55:34.228890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.667 [2024-07-13 19:55:34.281212] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.667 [2024-07-13 19:55:34.297534] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:07:46.667 INFO: Running with entropic power schedule (0xFF, 100). 00:07:46.667 INFO: Seed: 2957216807 00:07:46.926 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:46.926 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:46.926 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:46.926 INFO: A corpus is not provided, starting from an empty corpus 00:07:46.926 #2 INITED exec/s: 0 rss: 62Mb 00:07:46.926 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:46.926 This may also happen if the target rejected all inputs we tried so far 00:07:46.926 [2024-07-13 19:55:34.365351] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.926 [2024-07-13 19:55:34.365391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.926 [2024-07-13 19:55:34.365466] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.926 [2024-07-13 19:55:34.365484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.926 [2024-07-13 19:55:34.365559] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.926 [2024-07-13 19:55:34.365574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.186 NEW_FUNC[1/694]: 0x4a7a80 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:07:47.186 NEW_FUNC[2/694]: 0x4c8f40 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:47.186 #4 NEW cov: 11843 ft: 11844 corp: 2/29b lim: 35 exec/s: 0 rss: 68Mb L: 28/28 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:47.186 #10 NEW cov: 11975 ft: 13296 corp: 3/38b lim: 35 exec/s: 0 rss: 70Mb L: 9/28 MS: 1 CMP- DE: "\020!\0008\225\177\000\000"- 00:07:47.186 [2024-07-13 19:55:34.755591] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.186 [2024-07-13 19:55:34.755631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.186 [2024-07-13 19:55:34.755768] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.186 [2024-07-13 19:55:34.755785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.186 [2024-07-13 19:55:34.755920] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.186 [2024-07-13 19:55:34.755939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.186 #11 NEW cov: 11981 ft: 13460 corp: 4/66b lim: 35 exec/s: 0 rss: 70Mb L: 28/28 MS: 1 ChangeByte- 00:07:47.186 [2024-07-13 19:55:34.815777] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.186 [2024-07-13 19:55:34.815805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.186 [2024-07-13 19:55:34.815937] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.186 [2024-07-13 19:55:34.815956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.186 [2024-07-13 19:55:34.816097] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.186 [2024-07-13 19:55:34.816114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.445 #12 NEW cov: 12066 ft: 13667 corp: 5/94b lim: 35 exec/s: 0 rss: 70Mb L: 28/28 MS: 1 ChangeBit- 00:07:47.445 #13 NEW cov: 12066 ft: 13865 corp: 6/103b lim: 35 exec/s: 0 rss: 70Mb L: 9/28 MS: 1 ChangeByte- 00:07:47.445 [2024-07-13 19:55:34.935596] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000095 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.445 [2024-07-13 19:55:34.935624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.445 #14 NEW cov: 12066 ft: 14212 corp: 7/119b lim: 35 exec/s: 0 rss: 70Mb L: 16/28 MS: 1 CrossOver- 00:07:47.445 #15 NEW cov: 12066 ft: 14294 corp: 8/128b lim: 35 exec/s: 0 rss: 70Mb L: 9/28 MS: 1 PersAutoDict- DE: "\020!\0008\225\177\000\000"- 00:07:47.445 [2024-07-13 19:55:35.046544] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.445 [2024-07-13 19:55:35.046570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.445 [2024-07-13 19:55:35.046706] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.445 [2024-07-13 19:55:35.046725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.445 [2024-07-13 19:55:35.046862] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.445 [2024-07-13 19:55:35.046880] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.445 #16 NEW cov: 12066 ft: 14356 corp: 9/156b lim: 35 exec/s: 0 rss: 70Mb L: 28/28 MS: 1 ShuffleBytes- 00:07:47.445 [2024-07-13 19:55:35.096091] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000095 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.445 [2024-07-13 19:55:35.096118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.705 #17 NEW cov: 12066 ft: 14446 corp: 10/176b lim: 35 exec/s: 0 rss: 70Mb L: 20/28 MS: 1 CMP- DE: "\377\377\377\377"- 00:07:47.705 [2024-07-13 19:55:35.155816] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST CONTROLLED THERMAL MANAGEMENT cid:4 cdw10:00000010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.705 [2024-07-13 19:55:35.155846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.705 #18 NEW cov: 12066 ft: 14542 corp: 11/185b lim: 35 exec/s: 0 rss: 70Mb L: 9/28 MS: 1 PersAutoDict- DE: "\020!\0008\225\177\000\000"- 00:07:47.705 [2024-07-13 19:55:35.206956] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.705 [2024-07-13 19:55:35.206984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.705 [2024-07-13 19:55:35.207129] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.705 [2024-07-13 19:55:35.207146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.705 [2024-07-13 19:55:35.207280] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.705 [2024-07-13 19:55:35.207297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.705 #19 NEW cov: 12066 ft: 14550 corp: 12/213b lim: 35 exec/s: 0 rss: 70Mb L: 28/28 MS: 1 CopyPart- 00:07:47.705 [2024-07-13 19:55:35.257146] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.705 [2024-07-13 19:55:35.257173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.705 [2024-07-13 19:55:35.257311] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.705 [2024-07-13 19:55:35.257330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.705 [2024-07-13 19:55:35.257474] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000038 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.705 [2024-07-13 19:55:35.257510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.705 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:47.705 #20 NEW cov: 12096 ft: 14610 corp: 13/244b lim: 
35 exec/s: 0 rss: 70Mb L: 31/31 MS: 1 CrossOver- 00:07:47.705 [2024-07-13 19:55:35.307582] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.705 [2024-07-13 19:55:35.307608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.705 [2024-07-13 19:55:35.307739] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.705 [2024-07-13 19:55:35.307755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.705 [2024-07-13 19:55:35.307889] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000cf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.705 [2024-07-13 19:55:35.307910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.705 [2024-07-13 19:55:35.308039] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.705 [2024-07-13 19:55:35.308057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:47.705 #26 NEW cov: 12096 ft: 14869 corp: 14/279b lim: 35 exec/s: 0 rss: 70Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:47.705 [2024-07-13 19:55:35.356494] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.705 [2024-07-13 19:55:35.356520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.964 #27 NEW cov: 12096 ft: 14936 corp: 15/288b lim: 35 exec/s: 27 rss: 70Mb L: 9/35 MS: 1 CMP- DE: "\000\000\377\377"- 00:07:47.964 [2024-07-13 19:55:35.416699] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST CONTROLLED THERMAL MANAGEMENT cid:4 cdw10:00000010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.964 [2024-07-13 19:55:35.416726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.964 #28 NEW cov: 12096 ft: 15017 corp: 16/297b lim: 35 exec/s: 28 rss: 70Mb L: 9/35 MS: 1 PersAutoDict- DE: "\000\000\377\377"- 00:07:47.964 [2024-07-13 19:55:35.477733] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.964 [2024-07-13 19:55:35.477760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.964 [2024-07-13 19:55:35.477892] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.964 [2024-07-13 19:55:35.477913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.964 [2024-07-13 19:55:35.478051] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.964 [2024-07-13 19:55:35.478067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:07:47.964 #29 NEW cov: 12096 ft: 15070 corp: 17/329b lim: 35 exec/s: 29 rss: 70Mb L: 32/35 MS: 1 PersAutoDict- DE: "\000\000\377\377"- 00:07:47.964 [2024-07-13 19:55:35.538013] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.964 [2024-07-13 19:55:35.538041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.964 [2024-07-13 19:55:35.538189] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.964 [2024-07-13 19:55:35.538207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.964 [2024-07-13 19:55:35.538345] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.964 [2024-07-13 19:55:35.538364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.964 #30 NEW cov: 12096 ft: 15122 corp: 18/357b lim: 35 exec/s: 30 rss: 70Mb L: 28/35 MS: 1 ChangeBinInt- 00:07:47.964 [2024-07-13 19:55:35.598229] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.964 [2024-07-13 19:55:35.598258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.964 [2024-07-13 19:55:35.598399] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.964 [2024-07-13 19:55:35.598419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.964 [2024-07-13 19:55:35.598567] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.964 [2024-07-13 19:55:35.598585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.964 #31 NEW cov: 12096 ft: 15155 corp: 19/385b lim: 35 exec/s: 31 rss: 70Mb L: 28/35 MS: 1 ChangeByte- 00:07:48.224 [2024-07-13 19:55:35.648038] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST CONTROLLED THERMAL MANAGEMENT cid:4 cdw10:00000010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.224 [2024-07-13 19:55:35.648067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.224 [2024-07-13 19:55:35.648199] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.224 [2024-07-13 19:55:35.648216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.224 [2024-07-13 19:55:35.648354] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.224 [2024-07-13 19:55:35.648371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.224 #32 NEW cov: 12096 ft: 
15235 corp: 20/410b lim: 35 exec/s: 32 rss: 70Mb L: 25/35 MS: 1 CrossOver- 00:07:48.224 [2024-07-13 19:55:35.708581] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.224 [2024-07-13 19:55:35.708608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.224 [2024-07-13 19:55:35.708742] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.224 [2024-07-13 19:55:35.708760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.224 #33 NEW cov: 12096 ft: 15378 corp: 21/443b lim: 35 exec/s: 33 rss: 70Mb L: 33/35 MS: 1 CrossOver- 00:07:48.224 [2024-07-13 19:55:35.768629] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.224 [2024-07-13 19:55:35.768666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.224 [2024-07-13 19:55:35.768833] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.224 [2024-07-13 19:55:35.768851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.224 [2024-07-13 19:55:35.768988] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.224 [2024-07-13 19:55:35.769006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.224 [2024-07-13 19:55:35.769146] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.224 [2024-07-13 19:55:35.769163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.224 #34 NEW cov: 12096 ft: 15432 corp: 22/471b lim: 35 exec/s: 34 rss: 71Mb L: 28/35 MS: 1 ChangeByte- 00:07:48.224 [2024-07-13 19:55:35.827926] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST CONTROLLED THERMAL MANAGEMENT cid:4 cdw10:00000010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.224 [2024-07-13 19:55:35.827955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.224 #37 NEW cov: 12096 ft: 15453 corp: 23/482b lim: 35 exec/s: 37 rss: 71Mb L: 11/35 MS: 3 CopyPart-PersAutoDict-CrossOver- DE: "\377\377\377\377"- 00:07:48.224 [2024-07-13 19:55:35.878062] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.224 [2024-07-13 19:55:35.878090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.483 #38 NEW cov: 12096 ft: 15459 corp: 24/490b lim: 35 exec/s: 38 rss: 71Mb L: 8/35 MS: 1 EraseBytes- 00:07:48.483 [2024-07-13 19:55:35.939477] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:48.483 [2024-07-13 19:55:35.939507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.483 [2024-07-13 19:55:35.939656] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.483 [2024-07-13 19:55:35.939674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.483 [2024-07-13 19:55:35.939797] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000cf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.483 [2024-07-13 19:55:35.939822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.483 [2024-07-13 19:55:35.939957] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.483 [2024-07-13 19:55:35.939975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:48.483 #39 NEW cov: 12096 ft: 15480 corp: 25/525b lim: 35 exec/s: 39 rss: 71Mb L: 35/35 MS: 1 CrossOver- 00:07:48.483 [2024-07-13 19:55:35.998449] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST CONTROLLED THERMAL MANAGEMENT cid:4 cdw10:00000010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.483 [2024-07-13 19:55:35.998478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.483 #40 NEW cov: 12096 ft: 15492 corp: 26/536b lim: 35 exec/s: 40 rss: 71Mb L: 11/35 MS: 1 ChangeBit- 00:07:48.483 [2024-07-13 19:55:36.058725] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST CONTROLLED THERMAL MANAGEMENT cid:4 cdw10:00000010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.483 [2024-07-13 19:55:36.058751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.483 #41 NEW cov: 12096 ft: 15501 corp: 27/547b lim: 35 exec/s: 41 rss: 71Mb L: 11/35 MS: 1 ChangeBinInt- 00:07:48.483 [2024-07-13 19:55:36.118864] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST CONTROLLED THERMAL MANAGEMENT cid:4 cdw10:00000010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.483 [2024-07-13 19:55:36.118892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.483 #42 NEW cov: 12096 ft: 15535 corp: 28/555b lim: 35 exec/s: 42 rss: 71Mb L: 8/35 MS: 1 EraseBytes- 00:07:48.743 [2024-07-13 19:55:36.169907] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.743 [2024-07-13 19:55:36.169936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.743 [2024-07-13 19:55:36.170082] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.743 [2024-07-13 19:55:36.170099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.743 [2024-07-13 19:55:36.170242] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.743 [2024-07-13 19:55:36.170261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.743 #43 NEW cov: 12096 ft: 15553 corp: 29/584b lim: 35 exec/s: 43 rss: 71Mb L: 29/35 MS: 1 InsertByte- 00:07:48.743 [2024-07-13 19:55:36.219154] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.743 [2024-07-13 19:55:36.219182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.743 #44 NEW cov: 12096 ft: 15563 corp: 30/593b lim: 35 exec/s: 44 rss: 71Mb L: 9/35 MS: 1 PersAutoDict- DE: "\000\000\377\377"- 00:07:48.743 [2024-07-13 19:55:36.269954] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.743 [2024-07-13 19:55:36.269982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.743 [2024-07-13 19:55:36.270125] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.743 [2024-07-13 19:55:36.270142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.743 #45 NEW cov: 12096 ft: 15574 corp: 31/618b lim: 35 exec/s: 45 rss: 71Mb L: 25/35 MS: 1 EraseBytes- 00:07:48.743 [2024-07-13 19:55:36.320469] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.743 [2024-07-13 19:55:36.320497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.743 [2024-07-13 19:55:36.320642] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.743 [2024-07-13 19:55:36.320661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:48.743 [2024-07-13 19:55:36.320805] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.743 [2024-07-13 19:55:36.320822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:48.743 #46 NEW cov: 12096 ft: 15591 corp: 32/646b lim: 35 exec/s: 23 rss: 71Mb L: 28/35 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:07:48.743 #46 DONE cov: 12096 ft: 15591 corp: 32/646b lim: 35 exec/s: 23 rss: 71Mb 00:07:48.743 ###### Recommended dictionary. ###### 00:07:48.743 "\020!\0008\225\177\000\000" # Uses: 2 00:07:48.743 "\377\377\377\377" # Uses: 2 00:07:48.743 "\000\000\377\377" # Uses: 3 00:07:48.743 ###### End of recommended dictionary. 
###### 00:07:48.743 Done 46 runs in 2 second(s) 00:07:49.001 19:55:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:07:49.001 19:55:36 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:49.001 19:55:36 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:49.001 19:55:36 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:07:49.001 19:55:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:07:49.001 19:55:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:49.001 19:55:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:49.001 19:55:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:49.001 19:55:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:07:49.001 19:55:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:49.001 19:55:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:49.001 19:55:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:07:49.001 19:55:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4415 00:07:49.001 19:55:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:49.001 19:55:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:07:49.001 19:55:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:49.001 19:55:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:49.001 19:55:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:49.001 19:55:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:07:49.001 [2024-07-13 19:55:36.512630] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:49.001 [2024-07-13 19:55:36.512697] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3680272 ] 00:07:49.001 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.260 [2024-07-13 19:55:36.765613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.260 [2024-07-13 19:55:36.796620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.260 [2024-07-13 19:55:36.848728] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.260 [2024-07-13 19:55:36.865037] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:07:49.260 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:49.260 INFO: Seed: 1229258858 00:07:49.260 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:49.260 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:49.260 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:49.260 INFO: A corpus is not provided, starting from an empty corpus 00:07:49.260 #2 INITED exec/s: 0 rss: 62Mb 00:07:49.260 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:49.260 This may also happen if the target rejected all inputs we tried so far 00:07:49.519 [2024-07-13 19:55:36.930257] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000024b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.519 [2024-07-13 19:55:36.930285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.777 NEW_FUNC[1/691]: 0x4a8fc0 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:07:49.777 NEW_FUNC[2/691]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:49.777 #5 NEW cov: 11800 ft: 11801 corp: 2/8b lim: 35 exec/s: 0 rss: 69Mb L: 7/7 MS: 3 InsertRepeatedBytes-ChangeBinInt-InsertByte- 00:07:49.777 [2024-07-13 19:55:37.261222] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000014b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.777 [2024-07-13 19:55:37.261280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.777 #6 NEW cov: 11930 ft: 12497 corp: 3/15b lim: 35 exec/s: 0 rss: 70Mb L: 7/7 MS: 1 ChangeByte- 00:07:49.777 [2024-07-13 19:55:37.321150] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000024b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.777 [2024-07-13 19:55:37.321176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.777 #12 NEW cov: 11936 ft: 12776 corp: 4/22b lim: 35 exec/s: 0 rss: 70Mb L: 7/7 MS: 1 ChangeBinInt- 00:07:49.777 [2024-07-13 19:55:37.361244] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.777 [2024-07-13 19:55:37.361269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.777 #13 NEW cov: 12021 ft: 12988 corp: 5/34b lim: 35 exec/s: 0 rss: 70Mb L: 12/12 MS: 1 InsertRepeatedBytes- 00:07:49.777 [2024-07-13 19:55:37.401391] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000795 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.777 [2024-07-13 19:55:37.401419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.777 #18 NEW cov: 12021 ft: 13097 corp: 6/41b lim: 35 exec/s: 0 rss: 70Mb L: 7/12 MS: 5 EraseBytes-ShuffleBytes-CMP-CopyPart-InsertByte- DE: "\377\013"- 00:07:50.036 [2024-07-13 19:55:37.451517] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000024b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.036 [2024-07-13 
19:55:37.451544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.036 #19 NEW cov: 12021 ft: 13278 corp: 7/48b lim: 35 exec/s: 0 rss: 70Mb L: 7/12 MS: 1 ChangeByte- 00:07:50.036 [2024-07-13 19:55:37.502004] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000074b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.036 [2024-07-13 19:55:37.502030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.036 [2024-07-13 19:55:37.502090] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.036 [2024-07-13 19:55:37.502103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.036 [2024-07-13 19:55:37.502158] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.036 [2024-07-13 19:55:37.502174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.036 [2024-07-13 19:55:37.502231] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.036 [2024-07-13 19:55:37.502243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.036 #20 NEW cov: 12021 ft: 13952 corp: 8/82b lim: 35 exec/s: 0 rss: 70Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:07:50.036 [2024-07-13 19:55:37.541756] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000024b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.036 [2024-07-13 19:55:37.541781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.036 #21 NEW cov: 12021 ft: 13978 corp: 9/89b lim: 35 exec/s: 0 rss: 70Mb L: 7/34 MS: 1 CopyPart- 00:07:50.036 [2024-07-13 19:55:37.581870] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000004b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.036 [2024-07-13 19:55:37.581894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.036 #22 NEW cov: 12021 ft: 14068 corp: 10/96b lim: 35 exec/s: 0 rss: 70Mb L: 7/34 MS: 1 ShuffleBytes- 00:07:50.036 [2024-07-13 19:55:37.622013] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000024b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.036 [2024-07-13 19:55:37.622037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.036 #23 NEW cov: 12021 ft: 14125 corp: 11/104b lim: 35 exec/s: 0 rss: 70Mb L: 8/34 MS: 1 CopyPart- 00:07:50.036 [2024-07-13 19:55:37.662091] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000024b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.036 [2024-07-13 19:55:37.662115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.036 #24 NEW cov: 12021 ft: 14136 corp: 12/112b lim: 35 exec/s: 0 rss: 70Mb L: 
8/34 MS: 1 InsertByte- 00:07:50.294 NEW_FUNC[1/1]: 0x4c8f40 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:50.294 #25 NEW cov: 12035 ft: 14192 corp: 13/124b lim: 35 exec/s: 0 rss: 70Mb L: 12/34 MS: 1 InsertRepeatedBytes- 00:07:50.294 [2024-07-13 19:55:37.742327] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.294 [2024-07-13 19:55:37.742352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.294 #26 NEW cov: 12035 ft: 14260 corp: 14/134b lim: 35 exec/s: 0 rss: 70Mb L: 10/34 MS: 1 EraseBytes- 00:07:50.294 [2024-07-13 19:55:37.792449] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000074b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.294 [2024-07-13 19:55:37.792473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.294 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:50.294 #30 NEW cov: 12058 ft: 14302 corp: 15/141b lim: 35 exec/s: 0 rss: 70Mb L: 7/34 MS: 4 EraseBytes-EraseBytes-EraseBytes-InsertRepeatedBytes- 00:07:50.294 [2024-07-13 19:55:37.832552] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007fc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.294 [2024-07-13 19:55:37.832576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.294 #31 NEW cov: 12058 ft: 14319 corp: 16/151b lim: 35 exec/s: 0 rss: 70Mb L: 10/34 MS: 1 ChangeBinInt- 00:07:50.294 [2024-07-13 19:55:37.882680] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000014b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.294 [2024-07-13 19:55:37.882706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.294 #32 NEW cov: 12058 ft: 14340 corp: 17/158b lim: 35 exec/s: 0 rss: 70Mb L: 7/34 MS: 1 ChangeByte- 00:07:50.294 [2024-07-13 19:55:37.923079] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.294 [2024-07-13 19:55:37.923103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.294 [2024-07-13 19:55:37.923157] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.294 [2024-07-13 19:55:37.923169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.553 #33 NEW cov: 12058 ft: 14557 corp: 18/184b lim: 35 exec/s: 33 rss: 70Mb L: 26/34 MS: 1 InsertRepeatedBytes- 00:07:50.553 [2024-07-13 19:55:37.973332] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000074b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.553 [2024-07-13 19:55:37.973357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.553 [2024-07-13 19:55:37.973416] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.553 [2024-07-13 19:55:37.973429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.553 [2024-07-13 19:55:37.973485] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.553 [2024-07-13 19:55:37.973498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.553 [2024-07-13 19:55:37.973554] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.553 [2024-07-13 19:55:37.973567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.553 #39 NEW cov: 12058 ft: 14630 corp: 19/218b lim: 35 exec/s: 39 rss: 70Mb L: 34/34 MS: 1 ChangeBinInt- 00:07:50.553 [2024-07-13 19:55:38.023085] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000024b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.553 [2024-07-13 19:55:38.023109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.553 #40 NEW cov: 12058 ft: 14657 corp: 20/227b lim: 35 exec/s: 40 rss: 70Mb L: 9/34 MS: 1 CMP- DE: "\000\006"- 00:07:50.553 [2024-07-13 19:55:38.053275] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000014b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.553 [2024-07-13 19:55:38.053299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.553 [2024-07-13 19:55:38.053353] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000024b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.553 [2024-07-13 19:55:38.053366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.553 #41 NEW cov: 12058 ft: 14744 corp: 21/241b lim: 35 exec/s: 41 rss: 70Mb L: 14/34 MS: 1 CrossOver- 00:07:50.553 [2024-07-13 19:55:38.093282] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000024b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.553 [2024-07-13 19:55:38.093306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.553 #42 NEW cov: 12058 ft: 14769 corp: 22/250b lim: 35 exec/s: 42 rss: 70Mb L: 9/34 MS: 1 CrossOver- 00:07:50.553 [2024-07-13 19:55:38.143762] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000074b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.553 [2024-07-13 19:55:38.143786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.553 [2024-07-13 19:55:38.143843] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.553 [2024-07-13 19:55:38.143857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.553 [2024-07-13 
19:55:38.143912] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.553 [2024-07-13 19:55:38.143925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.553 [2024-07-13 19:55:38.143980] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.553 [2024-07-13 19:55:38.143993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.553 #43 NEW cov: 12058 ft: 14782 corp: 23/284b lim: 35 exec/s: 43 rss: 70Mb L: 34/34 MS: 1 ChangeBinInt- 00:07:50.553 [2024-07-13 19:55:38.183569] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000024b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.553 [2024-07-13 19:55:38.183593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.812 #44 NEW cov: 12058 ft: 14813 corp: 24/293b lim: 35 exec/s: 44 rss: 70Mb L: 9/34 MS: 1 PersAutoDict- DE: "\000\006"- 00:07:50.812 [2024-07-13 19:55:38.233676] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.812 [2024-07-13 19:55:38.233701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.812 #45 NEW cov: 12058 ft: 14831 corp: 25/303b lim: 35 exec/s: 45 rss: 70Mb L: 10/34 MS: 1 ChangeBit- 00:07:50.812 [2024-07-13 19:55:38.273993] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000074b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.812 [2024-07-13 19:55:38.274016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.812 [2024-07-13 19:55:38.274072] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.812 [2024-07-13 19:55:38.274085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.812 [2024-07-13 19:55:38.274142] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.812 [2024-07-13 19:55:38.274156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.812 #46 NEW cov: 12058 ft: 14940 corp: 26/325b lim: 35 exec/s: 46 rss: 70Mb L: 22/34 MS: 1 EraseBytes- 00:07:50.812 [2024-07-13 19:55:38.324043] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000024b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.812 [2024-07-13 19:55:38.324067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.812 [2024-07-13 19:55:38.324124] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.812 [2024-07-13 19:55:38.324138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:07:50.812 #47 NEW cov: 12058 ft: 14960 corp: 27/341b lim: 35 exec/s: 47 rss: 71Mb L: 16/34 MS: 1 CrossOver- 00:07:50.812 [2024-07-13 19:55:38.374076] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000014b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.812 [2024-07-13 19:55:38.374102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.812 #48 NEW cov: 12058 ft: 14976 corp: 28/350b lim: 35 exec/s: 48 rss: 71Mb L: 9/34 MS: 1 PersAutoDict- DE: "\000\006"- 00:07:50.812 [2024-07-13 19:55:38.414450] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.812 [2024-07-13 19:55:38.414474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.812 [2024-07-13 19:55:38.414547] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.812 [2024-07-13 19:55:38.414562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.812 #49 NEW cov: 12058 ft: 14986 corp: 29/377b lim: 35 exec/s: 49 rss: 71Mb L: 27/34 MS: 1 CrossOver- 00:07:50.812 [2024-07-13 19:55:38.464649] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000036c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.812 [2024-07-13 19:55:38.464673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.812 [2024-07-13 19:55:38.464733] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.812 [2024-07-13 19:55:38.464746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.071 #50 NEW cov: 12058 ft: 14995 corp: 30/399b lim: 35 exec/s: 50 rss: 71Mb L: 22/34 MS: 1 CrossOver- 00:07:51.071 [2024-07-13 19:55:38.504471] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007fc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.071 [2024-07-13 19:55:38.504496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.071 #51 NEW cov: 12058 ft: 15035 corp: 31/409b lim: 35 exec/s: 51 rss: 71Mb L: 10/34 MS: 1 ChangeBit- 00:07:51.071 [2024-07-13 19:55:38.554580] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000074b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.071 [2024-07-13 19:55:38.554605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.071 #52 NEW cov: 12058 ft: 15087 corp: 32/416b lim: 35 exec/s: 52 rss: 71Mb L: 7/34 MS: 1 ChangeBinInt- 00:07:51.071 [2024-07-13 19:55:38.604990] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000024b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.071 [2024-07-13 19:55:38.605016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.071 [2024-07-13 19:55:38.605071] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.071 [2024-07-13 19:55:38.605084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.071 [2024-07-13 19:55:38.605157] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.071 [2024-07-13 19:55:38.605170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.071 #55 NEW cov: 12058 ft: 15093 corp: 33/441b lim: 35 exec/s: 55 rss: 71Mb L: 25/34 MS: 3 EraseBytes-InsertByte-InsertRepeatedBytes- 00:07:51.071 [2024-07-13 19:55:38.644811] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.071 [2024-07-13 19:55:38.644836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.071 #56 NEW cov: 12058 ft: 15097 corp: 34/451b lim: 35 exec/s: 56 rss: 71Mb L: 10/34 MS: 1 ChangeBit- 00:07:51.071 [2024-07-13 19:55:38.685023] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000014b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.071 [2024-07-13 19:55:38.685048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.071 [2024-07-13 19:55:38.685104] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.071 [2024-07-13 19:55:38.685117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.071 #57 NEW cov: 12058 ft: 15098 corp: 35/471b lim: 35 exec/s: 57 rss: 71Mb L: 20/34 MS: 1 InsertRepeatedBytes- 00:07:51.331 [2024-07-13 19:55:38.735208] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000014b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.331 [2024-07-13 19:55:38.735234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.331 [2024-07-13 19:55:38.735290] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000024b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.331 [2024-07-13 19:55:38.735303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.331 #58 NEW cov: 12058 ft: 15104 corp: 36/485b lim: 35 exec/s: 58 rss: 71Mb L: 14/34 MS: 1 ChangeByte- 00:07:51.331 [2024-07-13 19:55:38.785332] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000014b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.331 [2024-07-13 19:55:38.785357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.331 [2024-07-13 19:55:38.785410] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005b4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.331 [2024-07-13 19:55:38.785424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:07:51.331 #59 NEW cov: 12058 ft: 15109 corp: 37/499b lim: 35 exec/s: 59 rss: 71Mb L: 14/34 MS: 1 ChangeBinInt- 00:07:51.331 [2024-07-13 19:55:38.825310] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000025d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.331 [2024-07-13 19:55:38.825335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.331 #60 NEW cov: 12058 ft: 15127 corp: 38/507b lim: 35 exec/s: 60 rss: 71Mb L: 8/34 MS: 1 InsertByte- 00:07:51.331 [2024-07-13 19:55:38.875833] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000074b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.331 [2024-07-13 19:55:38.875858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.331 [2024-07-13 19:55:38.875910] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.331 [2024-07-13 19:55:38.875924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.331 [2024-07-13 19:55:38.875977] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.331 [2024-07-13 19:55:38.876006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.331 [2024-07-13 19:55:38.876062] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.331 [2024-07-13 19:55:38.876075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.331 #61 NEW cov: 12058 ft: 15202 corp: 39/541b lim: 35 exec/s: 61 rss: 71Mb L: 34/34 MS: 1 ChangeBinInt- 00:07:51.331 [2024-07-13 19:55:38.915718] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000025d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.331 [2024-07-13 19:55:38.915743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.331 [2024-07-13 19:55:38.915800] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.331 [2024-07-13 19:55:38.915814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.331 #62 NEW cov: 12058 ft: 15257 corp: 40/557b lim: 35 exec/s: 31 rss: 72Mb L: 16/34 MS: 1 InsertRepeatedBytes- 00:07:51.331 #62 DONE cov: 12058 ft: 15257 corp: 40/557b lim: 35 exec/s: 31 rss: 72Mb 00:07:51.331 ###### Recommended dictionary. ###### 00:07:51.331 "\377\013" # Uses: 1 00:07:51.331 "\000\006" # Uses: 2 00:07:51.331 ###### End of recommended dictionary. 
###### 00:07:51.331 Done 62 runs in 2 second(s) 00:07:51.591 19:55:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:07:51.591 19:55:39 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:51.591 19:55:39 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:51.591 19:55:39 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:07:51.591 19:55:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:07:51.591 19:55:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:51.591 19:55:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:51.591 19:55:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:51.591 19:55:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:07:51.591 19:55:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:51.591 19:55:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:51.591 19:55:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:07:51.591 19:55:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4416 00:07:51.591 19:55:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:51.591 19:55:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:07:51.591 19:55:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:51.591 19:55:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:51.591 19:55:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:51.591 19:55:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:07:51.591 [2024-07-13 19:55:39.103561] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:51.591 [2024-07-13 19:55:39.103634] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3680807 ] 00:07:51.591 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.850 [2024-07-13 19:55:39.355023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.850 [2024-07-13 19:55:39.385860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.850 [2024-07-13 19:55:39.437948] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.850 [2024-07-13 19:55:39.454263] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:07:51.850 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:51.850 INFO: Seed: 3819251001 00:07:51.850 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:51.850 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:51.850 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:51.850 INFO: A corpus is not provided, starting from an empty corpus 00:07:51.850 #2 INITED exec/s: 0 rss: 62Mb 00:07:51.850 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:51.850 This may also happen if the target rejected all inputs we tried so far 00:07:52.109 [2024-07-13 19:55:39.530927] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.109 [2024-07-13 19:55:39.530971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.109 [2024-07-13 19:55:39.531035] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.109 [2024-07-13 19:55:39.531054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.412 NEW_FUNC[1/692]: 0x4aa470 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:07:52.412 NEW_FUNC[2/692]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:52.412 #15 NEW cov: 11888 ft: 11888 corp: 2/46b lim: 105 exec/s: 0 rss: 69Mb L: 45/45 MS: 3 ChangeByte-CopyPart-InsertRepeatedBytes- 00:07:52.412 [2024-07-13 19:55:39.871034] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.412 [2024-07-13 19:55:39.871070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.412 [2024-07-13 19:55:39.871203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.412 [2024-07-13 19:55:39.871224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.412 #16 NEW cov: 12034 ft: 12519 corp: 3/92b lim: 105 exec/s: 0 rss: 70Mb L: 46/46 MS: 1 InsertByte- 00:07:52.412 [2024-07-13 19:55:39.921121] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.412 [2024-07-13 19:55:39.921154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.412 [2024-07-13 19:55:39.921269] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.412 [2024-07-13 19:55:39.921292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.412 #17 NEW cov: 12040 ft: 12774 corp: 4/138b lim: 105 exec/s: 0 rss: 70Mb L: 46/46 MS: 1 
ChangeByte- 00:07:52.412 [2024-07-13 19:55:39.971272] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.412 [2024-07-13 19:55:39.971303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.412 [2024-07-13 19:55:39.971421] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.412 [2024-07-13 19:55:39.971448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.412 #23 NEW cov: 12125 ft: 13014 corp: 5/185b lim: 105 exec/s: 0 rss: 70Mb L: 47/47 MS: 1 CrossOver- 00:07:52.412 [2024-07-13 19:55:40.031591] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.412 [2024-07-13 19:55:40.031628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.412 [2024-07-13 19:55:40.031758] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.412 [2024-07-13 19:55:40.031778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.412 #24 NEW cov: 12125 ft: 13267 corp: 6/231b lim: 105 exec/s: 0 rss: 70Mb L: 46/47 MS: 1 ChangeBit- 00:07:52.719 [2024-07-13 19:55:40.081963] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.719 [2024-07-13 19:55:40.082001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.719 [2024-07-13 19:55:40.082125] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:7016996765293437281 len:24930 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.719 [2024-07-13 19:55:40.082153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.719 [2024-07-13 19:55:40.082285] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:7016996765293437281 len:24930 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.719 [2024-07-13 19:55:40.082313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.719 #25 NEW cov: 12125 ft: 13640 corp: 7/314b lim: 105 exec/s: 0 rss: 70Mb L: 83/83 MS: 1 InsertRepeatedBytes- 00:07:52.719 [2024-07-13 19:55:40.131875] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.719 [2024-07-13 19:55:40.131909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.719 [2024-07-13 19:55:40.132028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:52.719 [2024-07-13 19:55:40.132054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.719 #26 NEW cov: 12125 ft: 13774 corp: 8/360b lim: 105 exec/s: 0 rss: 70Mb L: 46/83 MS: 1 ChangeBit- 00:07:52.719 [2024-07-13 19:55:40.191991] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2801795072 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.719 [2024-07-13 19:55:40.192024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.719 [2024-07-13 19:55:40.192140] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.719 [2024-07-13 19:55:40.192168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.719 #27 NEW cov: 12125 ft: 13829 corp: 9/406b lim: 105 exec/s: 0 rss: 70Mb L: 46/83 MS: 1 ChangeBinInt- 00:07:52.719 [2024-07-13 19:55:40.242453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:436207616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.719 [2024-07-13 19:55:40.242486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.719 [2024-07-13 19:55:40.242557] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.719 [2024-07-13 19:55:40.242585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.719 [2024-07-13 19:55:40.242715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.719 [2024-07-13 19:55:40.242740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.719 #29 NEW cov: 12125 ft: 13872 corp: 10/473b lim: 105 exec/s: 0 rss: 70Mb L: 67/83 MS: 2 ChangeBit-InsertRepeatedBytes- 00:07:52.719 [2024-07-13 19:55:40.282323] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.719 [2024-07-13 19:55:40.282351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.719 [2024-07-13 19:55:40.282495] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.719 [2024-07-13 19:55:40.282522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.719 #30 NEW cov: 12125 ft: 14037 corp: 11/519b lim: 105 exec/s: 0 rss: 70Mb L: 46/83 MS: 1 ChangeASCIIInt- 00:07:52.719 [2024-07-13 19:55:40.332669] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:436207616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.719 [2024-07-13 19:55:40.332703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:07:52.719 [2024-07-13 19:55:40.332832] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.719 [2024-07-13 19:55:40.332858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.719 [2024-07-13 19:55:40.332998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.719 [2024-07-13 19:55:40.333026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.719 #31 NEW cov: 12125 ft: 14055 corp: 12/596b lim: 105 exec/s: 0 rss: 70Mb L: 77/83 MS: 1 CrossOver- 00:07:52.979 [2024-07-13 19:55:40.392603] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.979 [2024-07-13 19:55:40.392642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.979 [2024-07-13 19:55:40.392768] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080808861146064807 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.979 [2024-07-13 19:55:40.392790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.979 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:52.979 #32 NEW cov: 12148 ft: 14112 corp: 13/642b lim: 105 exec/s: 0 rss: 70Mb L: 46/83 MS: 1 CMP- DE: ";\247\324\023\000\000\000\000"- 00:07:52.979 [2024-07-13 19:55:40.433181] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:436207616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.979 [2024-07-13 19:55:40.433215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.979 [2024-07-13 19:55:40.433334] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.979 [2024-07-13 19:55:40.433355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.979 [2024-07-13 19:55:40.433493] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.979 [2024-07-13 19:55:40.433517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.979 [2024-07-13 19:55:40.433638] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:13889313184910721216 len:49345 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.979 [2024-07-13 19:55:40.433663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:52.979 #33 NEW cov: 12148 ft: 14609 corp: 14/744b lim: 105 exec/s: 0 rss: 70Mb L: 102/102 MS: 1 InsertRepeatedBytes- 00:07:52.979 [2024-07-13 19:55:40.472461] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12083342138749200295 
len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.979 [2024-07-13 19:55:40.472497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.979 [2024-07-13 19:55:40.472620] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.979 [2024-07-13 19:55:40.472649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.979 #34 NEW cov: 12148 ft: 14667 corp: 15/791b lim: 105 exec/s: 0 rss: 70Mb L: 47/102 MS: 1 InsertByte- 00:07:52.979 [2024-07-13 19:55:40.523040] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.979 [2024-07-13 19:55:40.523075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.979 [2024-07-13 19:55:40.523224] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080808863958804391 len:42965 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.979 [2024-07-13 19:55:40.523245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.979 #35 NEW cov: 12148 ft: 14680 corp: 16/845b lim: 105 exec/s: 35 rss: 70Mb L: 54/102 MS: 1 PersAutoDict- DE: ";\247\324\023\000\000\000\000"- 00:07:52.979 [2024-07-13 19:55:40.573240] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.979 [2024-07-13 19:55:40.573274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.979 [2024-07-13 19:55:40.573407] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080808863958804391 len:54292 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.979 [2024-07-13 19:55:40.573434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.979 #36 NEW cov: 12148 ft: 14710 corp: 17/891b lim: 105 exec/s: 36 rss: 70Mb L: 46/102 MS: 1 PersAutoDict- DE: ";\247\324\023\000\000\000\000"- 00:07:52.979 [2024-07-13 19:55:40.612931] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.979 [2024-07-13 19:55:40.612967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.979 [2024-07-13 19:55:40.613096] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.979 [2024-07-13 19:55:40.613123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.979 #37 NEW cov: 12148 ft: 14729 corp: 18/937b lim: 105 exec/s: 37 rss: 70Mb L: 46/102 MS: 1 ChangeBinInt- 00:07:53.239 [2024-07-13 19:55:40.663516] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.239 [2024-07-13 19:55:40.663552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.239 [2024-07-13 19:55:40.663687] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080808861146057895 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.239 [2024-07-13 19:55:40.663708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.239 #38 NEW cov: 12148 ft: 14758 corp: 19/984b lim: 105 exec/s: 38 rss: 70Mb L: 47/102 MS: 1 InsertByte- 00:07:53.239 [2024-07-13 19:55:40.713707] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.239 [2024-07-13 19:55:40.713741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.239 [2024-07-13 19:55:40.713852] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:7016996765293437281 len:24930 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.239 [2024-07-13 19:55:40.713876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.239 [2024-07-13 19:55:40.713998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:7016996765293437281 len:24930 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.239 [2024-07-13 19:55:40.714021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.239 #39 NEW cov: 12148 ft: 14769 corp: 20/1067b lim: 105 exec/s: 39 rss: 70Mb L: 83/102 MS: 1 ChangeByte- 00:07:53.239 [2024-07-13 19:55:40.774143] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.239 [2024-07-13 19:55:40.774174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.239 [2024-07-13 19:55:40.774262] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:767766600004118322 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.239 [2024-07-13 19:55:40.774288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.239 [2024-07-13 19:55:40.774414] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:12080808451641943975 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.239 [2024-07-13 19:55:40.774437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.239 [2024-07-13 19:55:40.774567] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.239 [2024-07-13 19:55:40.774589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:53.239 #40 NEW cov: 12148 ft: 14842 corp: 21/1154b lim: 105 exec/s: 40 rss: 70Mb L: 87/102 MS: 1 CopyPart- 
00:07:53.239 [2024-07-13 19:55:40.813898] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1000854547 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.239 [2024-07-13 19:55:40.813925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.239 [2024-07-13 19:55:40.814066] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080808861985580967 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.239 [2024-07-13 19:55:40.814087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.239 #41 NEW cov: 12148 ft: 14853 corp: 22/1209b lim: 105 exec/s: 41 rss: 70Mb L: 55/102 MS: 1 PersAutoDict- DE: ";\247\324\023\000\000\000\000"- 00:07:53.239 [2024-07-13 19:55:40.854326] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.239 [2024-07-13 19:55:40.854357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.239 [2024-07-13 19:55:40.854450] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:7016996765293437281 len:24930 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.239 [2024-07-13 19:55:40.854471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.239 [2024-07-13 19:55:40.854603] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:7016996765293437281 len:24930 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.239 [2024-07-13 19:55:40.854624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.239 [2024-07-13 19:55:40.854744] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.239 [2024-07-13 19:55:40.854766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:53.239 #42 NEW cov: 12148 ft: 14863 corp: 23/1293b lim: 105 exec/s: 42 rss: 70Mb L: 84/102 MS: 1 InsertByte- 00:07:53.239 [2024-07-13 19:55:40.893754] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.239 [2024-07-13 19:55:40.893791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.239 [2024-07-13 19:55:40.893921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.239 [2024-07-13 19:55:40.893950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.499 #43 NEW cov: 12148 ft: 14868 corp: 24/1340b lim: 105 exec/s: 43 rss: 70Mb L: 47/102 MS: 1 InsertByte- 00:07:53.499 [2024-07-13 19:55:40.943964] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:53.499 [2024-07-13 19:55:40.943997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.499 [2024-07-13 19:55:40.944123] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.499 [2024-07-13 19:55:40.944148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.499 #44 NEW cov: 12148 ft: 14914 corp: 25/1387b lim: 105 exec/s: 44 rss: 70Mb L: 47/102 MS: 1 CrossOver- 00:07:53.499 [2024-07-13 19:55:40.994586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.499 [2024-07-13 19:55:40.994619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.499 [2024-07-13 19:55:40.994718] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080731597497149351 len:24930 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.499 [2024-07-13 19:55:40.994740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.499 [2024-07-13 19:55:40.994862] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:7016996765293437281 len:24930 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.499 [2024-07-13 19:55:40.994886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.499 #45 NEW cov: 12148 ft: 14930 corp: 26/1470b lim: 105 exec/s: 45 rss: 70Mb L: 83/102 MS: 1 CopyPart- 00:07:53.499 [2024-07-13 19:55:41.034468] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.499 [2024-07-13 19:55:41.034500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.499 [2024-07-13 19:55:41.034611] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080808863958804391 len:42965 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.499 [2024-07-13 19:55:41.034638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.499 #46 NEW cov: 12148 ft: 14959 corp: 27/1528b lim: 105 exec/s: 46 rss: 70Mb L: 58/102 MS: 1 CMP- DE: "\004\000\000\000"- 00:07:53.499 [2024-07-13 19:55:41.084584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.499 [2024-07-13 19:55:41.084619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.499 [2024-07-13 19:55:41.084735] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.499 [2024-07-13 19:55:41.084757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:53.499 #47 NEW cov: 12148 ft: 14986 corp: 28/1574b lim: 105 exec/s: 47 rss: 70Mb L: 46/102 MS: 1 ShuffleBytes- 00:07:53.499 [2024-07-13 19:55:41.135204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.499 [2024-07-13 19:55:41.135235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.499 [2024-07-13 19:55:41.135336] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080731597497149351 len:24930 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.499 [2024-07-13 19:55:41.135364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.499 [2024-07-13 19:55:41.135501] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2837656311089800999 len:24930 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.499 [2024-07-13 19:55:41.135522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.499 [2024-07-13 19:55:41.135639] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.499 [2024-07-13 19:55:41.135661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:53.758 #48 NEW cov: 12148 ft: 15015 corp: 29/1661b lim: 105 exec/s: 48 rss: 71Mb L: 87/102 MS: 1 InsertRepeatedBytes- 00:07:53.758 [2024-07-13 19:55:41.184985] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2801795072 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.758 [2024-07-13 19:55:41.185017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.759 [2024-07-13 19:55:41.185142] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080808863948122023 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.759 [2024-07-13 19:55:41.185167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.759 #49 NEW cov: 12148 ft: 15019 corp: 30/1707b lim: 105 exec/s: 49 rss: 71Mb L: 46/102 MS: 1 ChangeByte- 00:07:53.759 [2024-07-13 19:55:41.235144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.759 [2024-07-13 19:55:41.235174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.759 [2024-07-13 19:55:41.235302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.759 [2024-07-13 19:55:41.235321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.759 #50 NEW cov: 12148 ft: 15040 corp: 31/1753b lim: 105 exec/s: 50 rss: 71Mb L: 46/102 MS: 1 CopyPart- 00:07:53.759 [2024-07-13 19:55:41.295370] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.759 [2024-07-13 19:55:41.295404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.759 [2024-07-13 19:55:41.295542] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080808863958804391 len:54332 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.759 [2024-07-13 19:55:41.295567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.759 #51 NEW cov: 12148 ft: 15047 corp: 32/1807b lim: 105 exec/s: 51 rss: 71Mb L: 54/102 MS: 1 PersAutoDict- DE: ";\247\324\023\000\000\000\000"- 00:07:53.759 [2024-07-13 19:55:41.356096] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12083342138749200295 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.759 [2024-07-13 19:55:41.356129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.759 [2024-07-13 19:55:41.356211] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.759 [2024-07-13 19:55:41.356232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.759 [2024-07-13 19:55:41.356359] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.759 [2024-07-13 19:55:41.356382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.759 [2024-07-13 19:55:41.356504] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.759 [2024-07-13 19:55:41.356527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:53.759 [2024-07-13 19:55:41.356647] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.759 [2024-07-13 19:55:41.356669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:53.759 #52 NEW cov: 12148 ft: 15077 corp: 33/1912b lim: 105 exec/s: 52 rss: 71Mb L: 105/105 MS: 1 InsertRepeatedBytes- 00:07:53.759 [2024-07-13 19:55:41.415768] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.759 [2024-07-13 19:55:41.415800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.759 [2024-07-13 19:55:41.415904] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.759 [2024-07-13 19:55:41.415931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 
m:0 dnr:1 00:07:54.018 #53 NEW cov: 12148 ft: 15091 corp: 34/1958b lim: 105 exec/s: 53 rss: 71Mb L: 46/105 MS: 1 ChangeBit- 00:07:54.018 [2024-07-13 19:55:41.455938] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.018 [2024-07-13 19:55:41.455967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.018 [2024-07-13 19:55:41.456078] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080808863958804391 len:54292 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.018 [2024-07-13 19:55:41.456102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.018 [2024-07-13 19:55:41.456229] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:12080808863958804391 len:2728 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.018 [2024-07-13 19:55:41.456252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.018 [2024-07-13 19:55:41.456385] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:7016996765293437281 len:24930 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.018 [2024-07-13 19:55:41.456413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.018 #54 NEW cov: 12148 ft: 15097 corp: 35/2049b lim: 105 exec/s: 54 rss: 71Mb L: 91/105 MS: 1 CrossOver- 00:07:54.018 [2024-07-13 19:55:41.496255] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12080808863958804391 len:42920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.018 [2024-07-13 19:55:41.496289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.018 [2024-07-13 19:55:41.496374] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12080731597497149351 len:24930 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.018 [2024-07-13 19:55:41.496395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.018 [2024-07-13 19:55:41.496535] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:7812738666512280684 len:27757 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.018 [2024-07-13 19:55:41.496558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.018 [2024-07-13 19:55:41.496694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:7016996764320358753 len:24930 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.018 [2024-07-13 19:55:41.496718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.019 #55 NEW cov: 12148 ft: 15105 corp: 36/2153b lim: 105 exec/s: 27 rss: 71Mb L: 104/105 MS: 1 InsertRepeatedBytes- 00:07:54.019 #55 DONE cov: 12148 ft: 15105 corp: 36/2153b lim: 105 exec/s: 27 rss: 71Mb 00:07:54.019 ###### Recommended dictionary. 
###### 00:07:54.019 ";\247\324\023\000\000\000\000" # Uses: 4 00:07:54.019 "\004\000\000\000" # Uses: 0 00:07:54.019 ###### End of recommended dictionary. ###### 00:07:54.019 Done 55 runs in 2 second(s) 00:07:54.019 19:55:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:07:54.019 19:55:41 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:54.019 19:55:41 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:54.019 19:55:41 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:07:54.019 19:55:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:07:54.019 19:55:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:54.019 19:55:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:54.019 19:55:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:54.019 19:55:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:07:54.019 19:55:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:54.019 19:55:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:54.019 19:55:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:07:54.019 19:55:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4417 00:07:54.019 19:55:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:54.019 19:55:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:07:54.019 19:55:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:54.019 19:55:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:54.019 19:55:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:54.019 19:55:41 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:07:54.278 [2024-07-13 19:55:41.689505] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:54.278 [2024-07-13 19:55:41.689567] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3681117 ] 00:07:54.278 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.537 [2024-07-13 19:55:41.954122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.537 [2024-07-13 19:55:41.981826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.537 [2024-07-13 19:55:42.034163] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.537 [2024-07-13 19:55:42.050479] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:07:54.537 INFO: Running with entropic power schedule (0xFF, 100). 00:07:54.537 INFO: Seed: 2120320825 00:07:54.537 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:54.537 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:54.537 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:54.537 INFO: A corpus is not provided, starting from an empty corpus 00:07:54.537 #2 INITED exec/s: 0 rss: 62Mb 00:07:54.537 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:54.537 This may also happen if the target rejected all inputs we tried so far 00:07:54.537 [2024-07-13 19:55:42.105548] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744072417705983 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.537 [2024-07-13 19:55:42.105579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.796 NEW_FUNC[1/692]: 0x4ad7f0 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:07:54.796 NEW_FUNC[2/692]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:54.796 #9 NEW cov: 11924 ft: 11926 corp: 2/27b lim: 120 exec/s: 0 rss: 68Mb L: 26/26 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:54.796 [2024-07-13 19:55:42.436427] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18425070499210985471 len:47104 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.796 [2024-07-13 19:55:42.436482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.055 NEW_FUNC[1/1]: 0xf5a050 in spdk_get_ticks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/env.c:296 00:07:55.055 #13 NEW cov: 12055 ft: 12545 corp: 3/57b lim: 120 exec/s: 0 rss: 70Mb L: 30/30 MS: 4 EraseBytes-ChangeByte-InsertByte-CopyPart- 00:07:55.055 [2024-07-13 19:55:42.496402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744072417705795 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.055 [2024-07-13 19:55:42.496433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.055 #14 NEW cov: 12061 ft: 12875 corp: 4/84b lim: 120 exec/s: 0 rss: 70Mb L: 27/30 MS: 1 InsertByte- 00:07:55.055 [2024-07-13 19:55:42.536728] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744072417705983 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.055 [2024-07-13 19:55:42.536756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.055 [2024-07-13 19:55:42.536824] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.055 [2024-07-13 19:55:42.536839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.055 #15 NEW cov: 12146 ft: 14013 corp: 5/136b lim: 120 exec/s: 0 rss: 70Mb L: 52/52 MS: 1 CopyPart- 00:07:55.055 [2024-07-13 19:55:42.576945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744072417705983 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.055 [2024-07-13 19:55:42.576972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.055 [2024-07-13 19:55:42.577010] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446659411314212863 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.055 [2024-07-13 19:55:42.577026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.055 [2024-07-13 19:55:42.577078] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.055 [2024-07-13 19:55:42.577093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.055 #16 NEW cov: 12146 ft: 14404 corp: 6/224b lim: 120 exec/s: 0 rss: 70Mb L: 88/88 MS: 1 CopyPart- 00:07:55.055 [2024-07-13 19:55:42.626836] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18425070499210985471 len:47104 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.055 [2024-07-13 19:55:42.626862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.055 #17 NEW cov: 12146 ft: 14492 corp: 7/254b lim: 120 exec/s: 0 rss: 70Mb L: 30/88 MS: 1 ShuffleBytes- 00:07:55.055 [2024-07-13 19:55:42.676952] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18425070499210985471 len:47104 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.055 [2024-07-13 19:55:42.676981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.055 #18 NEW cov: 12146 ft: 14630 corp: 8/284b lim: 120 exec/s: 0 rss: 70Mb L: 30/88 MS: 1 CrossOver- 00:07:55.315 [2024-07-13 19:55:42.727071] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18425070499210985471 len:47104 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.315 [2024-07-13 19:55:42.727098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.315 #24 NEW cov: 12146 ft: 14685 corp: 9/314b lim: 120 exec/s: 0 rss: 70Mb L: 30/88 MS: 1 ChangeByte- 00:07:55.315 [2024-07-13 19:55:42.767172] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:0 lba:18425070194268307455 len:47104 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.315 [2024-07-13 19:55:42.767199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.315 #25 NEW cov: 12146 ft: 14698 corp: 10/344b lim: 120 exec/s: 0 rss: 70Mb L: 30/88 MS: 1 ChangeByte- 00:07:55.315 [2024-07-13 19:55:42.807395] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744072417705983 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.315 [2024-07-13 19:55:42.807422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.315 [2024-07-13 19:55:42.807487] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.315 [2024-07-13 19:55:42.807503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.315 #30 NEW cov: 12146 ft: 14749 corp: 11/408b lim: 120 exec/s: 0 rss: 70Mb L: 64/88 MS: 5 EraseBytes-InsertByte-ChangeBinInt-EraseBytes-CrossOver- 00:07:55.315 [2024-07-13 19:55:42.857431] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18425070194268307455 len:47104 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.315 [2024-07-13 19:55:42.857464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.315 #31 NEW cov: 12146 ft: 14755 corp: 12/440b lim: 120 exec/s: 0 rss: 70Mb L: 32/88 MS: 1 CopyPart- 00:07:55.316 [2024-07-13 19:55:42.907718] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18425070194268307455 len:65459 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.316 [2024-07-13 19:55:42.907745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.316 [2024-07-13 19:55:42.907806] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13258597302978740223 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.316 [2024-07-13 19:55:42.907822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.316 #32 NEW cov: 12146 ft: 14777 corp: 13/500b lim: 120 exec/s: 0 rss: 70Mb L: 60/88 MS: 1 CopyPart- 00:07:55.316 [2024-07-13 19:55:42.957705] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18425070499210985471 len:65464 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.316 [2024-07-13 19:55:42.957732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.575 #33 NEW cov: 12146 ft: 14799 corp: 14/530b lim: 120 exec/s: 0 rss: 70Mb L: 30/88 MS: 1 ShuffleBytes- 00:07:55.575 [2024-07-13 19:55:42.997941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18425070194268307455 len:65459 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.575 [2024-07-13 19:55:42.997967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.575 [2024-07-13 19:55:42.998020] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13258597302978740223 len:11520 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.575 [2024-07-13 19:55:42.998036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.575 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:55.575 #34 NEW cov: 12169 ft: 14839 corp: 15/590b lim: 120 exec/s: 0 rss: 70Mb L: 60/88 MS: 1 ChangeByte- 00:07:55.575 [2024-07-13 19:55:43.047956] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18425070387541835775 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.575 [2024-07-13 19:55:43.047987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.575 #35 NEW cov: 12169 ft: 14854 corp: 16/620b lim: 120 exec/s: 0 rss: 70Mb L: 30/88 MS: 1 CrossOver- 00:07:55.575 [2024-07-13 19:55:43.098074] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18425070194268307455 len:47104 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.575 [2024-07-13 19:55:43.098101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.575 #36 NEW cov: 12169 ft: 14879 corp: 17/652b lim: 120 exec/s: 36 rss: 70Mb L: 32/88 MS: 1 ChangeBinInt- 00:07:55.575 [2024-07-13 19:55:43.138207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744072417705795 len:32768 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.575 [2024-07-13 19:55:43.138235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.575 #37 NEW cov: 12169 ft: 14897 corp: 18/679b lim: 120 exec/s: 37 rss: 70Mb L: 27/88 MS: 1 ChangeBit- 00:07:55.575 [2024-07-13 19:55:43.178319] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446659410022366986 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.575 [2024-07-13 19:55:43.178347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.575 #38 NEW cov: 12169 ft: 14919 corp: 19/710b lim: 120 exec/s: 38 rss: 70Mb L: 31/88 MS: 1 CrossOver- 00:07:55.575 [2024-07-13 19:55:43.228602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18425070194268307455 len:65459 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.575 [2024-07-13 19:55:43.228630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.575 [2024-07-13 19:55:43.228675] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13258597302978740223 len:11520 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.575 [2024-07-13 19:55:43.228692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.834 #39 NEW cov: 12169 ft: 14924 corp: 20/770b lim: 120 exec/s: 39 rss: 70Mb L: 60/88 MS: 1 ChangeBinInt- 00:07:55.834 [2024-07-13 19:55:43.278880] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744072417705983 len:65536 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:55.834 [2024-07-13 19:55:43.278908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.834 [2024-07-13 19:55:43.278945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446659411314212863 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.834 [2024-07-13 19:55:43.278961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.834 [2024-07-13 19:55:43.279013] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.834 [2024-07-13 19:55:43.279043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.834 #40 NEW cov: 12169 ft: 15016 corp: 21/858b lim: 120 exec/s: 40 rss: 71Mb L: 88/88 MS: 1 ShuffleBytes- 00:07:55.834 [2024-07-13 19:55:43.328875] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744072417705983 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.834 [2024-07-13 19:55:43.328902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.834 [2024-07-13 19:55:43.328954] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.834 [2024-07-13 19:55:43.328972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.834 #41 NEW cov: 12169 ft: 15032 corp: 22/922b lim: 120 exec/s: 41 rss: 71Mb L: 64/88 MS: 1 ChangeBinInt- 00:07:55.834 [2024-07-13 19:55:43.378870] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744072417705795 len:32768 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.834 [2024-07-13 19:55:43.378899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.834 #42 NEW cov: 12169 ft: 15071 corp: 23/950b lim: 120 exec/s: 42 rss: 71Mb L: 28/88 MS: 1 CrossOver- 00:07:55.834 [2024-07-13 19:55:43.429284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744072417705795 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.834 [2024-07-13 19:55:43.429310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.834 [2024-07-13 19:55:43.429355] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:17144620962926161389 len:60910 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.834 [2024-07-13 19:55:43.429371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.834 [2024-07-13 19:55:43.429422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:17144620962624171501 len:60910 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.834 [2024-07-13 19:55:43.429437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.834 #43 NEW cov: 12169 
ft: 15076 corp: 24/1035b lim: 120 exec/s: 43 rss: 71Mb L: 85/88 MS: 1 InsertRepeatedBytes- 00:07:55.834 [2024-07-13 19:55:43.469128] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12898309331497254911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.834 [2024-07-13 19:55:43.469156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.834 #44 NEW cov: 12169 ft: 15097 corp: 25/1064b lim: 120 exec/s: 44 rss: 71Mb L: 29/88 MS: 1 EraseBytes- 00:07:56.092 [2024-07-13 19:55:43.509241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446659410022366986 len:65510 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.092 [2024-07-13 19:55:43.509269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.092 #45 NEW cov: 12169 ft: 15102 corp: 26/1095b lim: 120 exec/s: 45 rss: 71Mb L: 31/88 MS: 1 ShuffleBytes- 00:07:56.092 [2024-07-13 19:55:43.559399] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744072417705795 len:32768 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.093 [2024-07-13 19:55:43.559427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.093 #51 NEW cov: 12169 ft: 15129 corp: 27/1122b lim: 120 exec/s: 51 rss: 71Mb L: 27/88 MS: 1 ShuffleBytes- 00:07:56.093 [2024-07-13 19:55:43.599514] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12898309331497254911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.093 [2024-07-13 19:55:43.599542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.093 #52 NEW cov: 12169 ft: 15141 corp: 28/1151b lim: 120 exec/s: 52 rss: 71Mb L: 29/88 MS: 1 ShuffleBytes- 00:07:56.093 [2024-07-13 19:55:43.649655] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:12898309332789100543 len:65317 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.093 [2024-07-13 19:55:43.649682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.093 #57 NEW cov: 12169 ft: 15151 corp: 29/1187b lim: 120 exec/s: 57 rss: 71Mb L: 36/88 MS: 5 ChangeByte-ShuffleBytes-CrossOver-InsertByte-InsertRepeatedBytes- 00:07:56.093 [2024-07-13 19:55:43.689797] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18425070194268307455 len:47104 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.093 [2024-07-13 19:55:43.689834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.093 #58 NEW cov: 12169 ft: 15152 corp: 30/1219b lim: 120 exec/s: 58 rss: 71Mb L: 32/88 MS: 1 ChangeByte- 00:07:56.093 [2024-07-13 19:55:43.730330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:15914838023672224988 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.093 [2024-07-13 19:55:43.730356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.093 [2024-07-13 19:55:43.730422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.093 [2024-07-13 19:55:43.730438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.093 [2024-07-13 19:55:43.730509] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.093 [2024-07-13 19:55:43.730524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.093 [2024-07-13 19:55:43.730576] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.093 [2024-07-13 19:55:43.730590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.352 #59 NEW cov: 12169 ft: 15514 corp: 31/1316b lim: 120 exec/s: 59 rss: 71Mb L: 97/97 MS: 1 InsertRepeatedBytes- 00:07:56.352 [2024-07-13 19:55:43.769980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18424985836815646719 len:65464 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.352 [2024-07-13 19:55:43.770007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.352 #60 NEW cov: 12169 ft: 15596 corp: 32/1346b lim: 120 exec/s: 60 rss: 71Mb L: 30/97 MS: 1 CopyPart- 00:07:56.352 [2024-07-13 19:55:43.810254] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18425070499210985471 len:47104 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.352 [2024-07-13 19:55:43.810282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.352 [2024-07-13 19:55:43.810346] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13258597302978740223 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.352 [2024-07-13 19:55:43.810361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.352 #61 NEW cov: 12169 ft: 15609 corp: 33/1401b lim: 120 exec/s: 61 rss: 71Mb L: 55/97 MS: 1 CrossOver- 00:07:56.352 [2024-07-13 19:55:43.860219] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18425070194268307282 len:47104 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.352 [2024-07-13 19:55:43.860246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.352 #62 NEW cov: 12169 ft: 15640 corp: 34/1431b lim: 120 exec/s: 62 rss: 71Mb L: 30/97 MS: 1 ChangeByte- 00:07:56.352 [2024-07-13 19:55:43.900326] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18425070194268307282 len:47104 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.352 [2024-07-13 19:55:43.900356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.352 #63 NEW cov: 12169 ft: 15698 corp: 35/1461b lim: 120 exec/s: 63 rss: 71Mb L: 30/97 MS: 1 ChangeBinInt- 00:07:56.352 [2024-07-13 19:55:43.950666] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18425070194268307455 len:65459 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.352 [2024-07-13 19:55:43.950692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.352 [2024-07-13 19:55:43.950749] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13258597302978740223 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.352 [2024-07-13 19:55:43.950765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.352 #64 NEW cov: 12169 ft: 15702 corp: 36/1521b lim: 120 exec/s: 64 rss: 71Mb L: 60/97 MS: 1 ShuffleBytes- 00:07:56.352 [2024-07-13 19:55:43.990758] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18425070194268307455 len:65459 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.352 [2024-07-13 19:55:43.990784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.352 [2024-07-13 19:55:43.990821] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13258588506885718015 len:11520 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.352 [2024-07-13 19:55:43.990835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.610 #65 NEW cov: 12169 ft: 15718 corp: 37/1581b lim: 120 exec/s: 65 rss: 71Mb L: 60/97 MS: 1 ChangeBit- 00:07:56.610 [2024-07-13 19:55:44.040876] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744072417705983 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.610 [2024-07-13 19:55:44.040902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.610 [2024-07-13 19:55:44.040939] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.610 [2024-07-13 19:55:44.040954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.610 #66 NEW cov: 12169 ft: 15748 corp: 38/1645b lim: 120 exec/s: 66 rss: 72Mb L: 64/97 MS: 1 ChangeBit- 00:07:56.610 [2024-07-13 19:55:44.091016] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18425070194268307455 len:1972 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.610 [2024-07-13 19:55:44.091042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.610 [2024-07-13 19:55:44.091106] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13258597302978740223 len:11520 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:56.610 [2024-07-13 19:55:44.091123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.610 #67 NEW cov: 12169 ft: 15753 corp: 39/1705b lim: 120 exec/s: 33 rss: 72Mb L: 60/97 MS: 1 ChangeBinInt- 00:07:56.610 #67 DONE cov: 12169 ft: 15753 corp: 39/1705b lim: 120 exec/s: 33 rss: 72Mb 00:07:56.610 Done 67 runs in 2 second(s) 00:07:56.610 19:55:44 
llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:07:56.610 19:55:44 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:56.610 19:55:44 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:56.610 19:55:44 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:07:56.610 19:55:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:07:56.610 19:55:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:56.610 19:55:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:56.610 19:55:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:56.610 19:55:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:07:56.610 19:55:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:56.610 19:55:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:56.610 19:55:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:07:56.610 19:55:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4418 00:07:56.610 19:55:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:56.610 19:55:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:07:56.610 19:55:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:56.610 19:55:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:56.610 19:55:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:56.610 19:55:44 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:07:56.610 [2024-07-13 19:55:44.270359] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:56.610 [2024-07-13 19:55:44.270461] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3681634 ] 00:07:56.868 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.868 [2024-07-13 19:55:44.524877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.126 [2024-07-13 19:55:44.555698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.126 [2024-07-13 19:55:44.607949] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.126 [2024-07-13 19:55:44.624259] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:07:57.126 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:57.126 INFO: Seed: 399330901 00:07:57.126 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:57.126 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:57.126 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:57.126 INFO: A corpus is not provided, starting from an empty corpus 00:07:57.126 #2 INITED exec/s: 0 rss: 62Mb 00:07:57.126 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:57.126 This may also happen if the target rejected all inputs we tried so far 00:07:57.126 [2024-07-13 19:55:44.679601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.126 [2024-07-13 19:55:44.679630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.126 [2024-07-13 19:55:44.679667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.126 [2024-07-13 19:55:44.679683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.126 [2024-07-13 19:55:44.679740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.126 [2024-07-13 19:55:44.679756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.384 NEW_FUNC[1/691]: 0x4b10e0 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:07:57.384 NEW_FUNC[2/691]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:57.384 #25 NEW cov: 11868 ft: 11867 corp: 2/68b lim: 100 exec/s: 0 rss: 69Mb L: 67/67 MS: 3 ChangeByte-ChangeByte-InsertRepeatedBytes- 00:07:57.384 [2024-07-13 19:55:45.010364] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.384 [2024-07-13 19:55:45.010398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.384 [2024-07-13 19:55:45.010469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.384 [2024-07-13 19:55:45.010484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.384 [2024-07-13 19:55:45.010546] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.384 [2024-07-13 19:55:45.010560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.384 #27 NEW cov: 11998 ft: 12362 corp: 3/132b lim: 100 exec/s: 0 rss: 69Mb L: 64/67 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:57.642 [2024-07-13 19:55:45.050412] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.642 [2024-07-13 19:55:45.050448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.642 [2024-07-13 19:55:45.050485] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 
cid:1 nsid:0 00:07:57.643 [2024-07-13 19:55:45.050502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.643 [2024-07-13 19:55:45.050556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.643 [2024-07-13 19:55:45.050571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.643 #33 NEW cov: 12004 ft: 12503 corp: 4/210b lim: 100 exec/s: 0 rss: 69Mb L: 78/78 MS: 1 InsertRepeatedBytes- 00:07:57.643 [2024-07-13 19:55:45.100297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.643 [2024-07-13 19:55:45.100323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.643 #47 NEW cov: 12089 ft: 13235 corp: 5/245b lim: 100 exec/s: 0 rss: 69Mb L: 35/78 MS: 4 ChangeBit-ChangeBit-InsertRepeatedBytes-CrossOver- 00:07:57.643 [2024-07-13 19:55:45.140468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.643 [2024-07-13 19:55:45.140494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.643 #48 NEW cov: 12089 ft: 13411 corp: 6/274b lim: 100 exec/s: 0 rss: 69Mb L: 29/78 MS: 1 EraseBytes- 00:07:57.643 [2024-07-13 19:55:45.190798] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.643 [2024-07-13 19:55:45.190824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.643 [2024-07-13 19:55:45.190868] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.643 [2024-07-13 19:55:45.190899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.643 [2024-07-13 19:55:45.190954] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.643 [2024-07-13 19:55:45.190968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.643 #49 NEW cov: 12089 ft: 13485 corp: 7/341b lim: 100 exec/s: 0 rss: 69Mb L: 67/78 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:07:57.643 [2024-07-13 19:55:45.230736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.643 [2024-07-13 19:55:45.230761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.643 #50 NEW cov: 12089 ft: 13537 corp: 8/376b lim: 100 exec/s: 0 rss: 69Mb L: 35/78 MS: 1 ChangeBit- 00:07:57.643 [2024-07-13 19:55:45.270812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.643 [2024-07-13 19:55:45.270837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.902 #56 NEW cov: 12089 ft: 13580 corp: 9/411b lim: 100 exec/s: 0 rss: 69Mb L: 35/78 MS: 1 ChangeByte- 00:07:57.902 [2024-07-13 19:55:45.321022] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE 
ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.902 [2024-07-13 19:55:45.321048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.902 #62 NEW cov: 12089 ft: 13622 corp: 10/440b lim: 100 exec/s: 0 rss: 70Mb L: 29/78 MS: 1 ChangeBit- 00:07:57.902 [2024-07-13 19:55:45.371156] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.902 [2024-07-13 19:55:45.371184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.902 #63 NEW cov: 12089 ft: 13666 corp: 11/477b lim: 100 exec/s: 0 rss: 70Mb L: 37/78 MS: 1 CopyPart- 00:07:57.902 [2024-07-13 19:55:45.411482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.902 [2024-07-13 19:55:45.411508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.902 [2024-07-13 19:55:45.411553] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.902 [2024-07-13 19:55:45.411568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.902 [2024-07-13 19:55:45.411623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.902 [2024-07-13 19:55:45.411639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.902 #65 NEW cov: 12089 ft: 13689 corp: 12/541b lim: 100 exec/s: 0 rss: 70Mb L: 64/78 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:57.902 [2024-07-13 19:55:45.441308] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.902 [2024-07-13 19:55:45.441334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.902 #66 NEW cov: 12089 ft: 13770 corp: 13/577b lim: 100 exec/s: 0 rss: 70Mb L: 36/78 MS: 1 InsertByte- 00:07:57.902 [2024-07-13 19:55:45.481467] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.902 [2024-07-13 19:55:45.481492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.902 #67 NEW cov: 12089 ft: 13787 corp: 14/612b lim: 100 exec/s: 0 rss: 70Mb L: 35/78 MS: 1 CrossOver- 00:07:57.902 [2024-07-13 19:55:45.521606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.902 [2024-07-13 19:55:45.521631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.902 #73 NEW cov: 12089 ft: 13832 corp: 15/641b lim: 100 exec/s: 0 rss: 70Mb L: 29/78 MS: 1 CopyPart- 00:07:57.902 [2024-07-13 19:55:45.561679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.902 [2024-07-13 19:55:45.561705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.161 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:58.161 #74 
NEW cov: 12112 ft: 13890 corp: 16/676b lim: 100 exec/s: 0 rss: 70Mb L: 35/78 MS: 1 ChangeBinInt- 00:07:58.161 [2024-07-13 19:55:45.611812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:58.161 [2024-07-13 19:55:45.611838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.161 #75 NEW cov: 12112 ft: 13921 corp: 17/713b lim: 100 exec/s: 0 rss: 70Mb L: 37/78 MS: 1 ChangeBit- 00:07:58.161 [2024-07-13 19:55:45.661965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:58.161 [2024-07-13 19:55:45.661990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.161 #77 NEW cov: 12112 ft: 13936 corp: 18/749b lim: 100 exec/s: 77 rss: 70Mb L: 36/78 MS: 2 CrossOver-CrossOver- 00:07:58.161 [2024-07-13 19:55:45.702254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:58.161 [2024-07-13 19:55:45.702278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.161 [2024-07-13 19:55:45.702342] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:58.161 [2024-07-13 19:55:45.702357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.161 [2024-07-13 19:55:45.702408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:58.161 [2024-07-13 19:55:45.702422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.161 #78 NEW cov: 12112 ft: 13992 corp: 19/816b lim: 100 exec/s: 78 rss: 70Mb L: 67/78 MS: 1 ChangeByte- 00:07:58.161 [2024-07-13 19:55:45.742185] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:58.161 [2024-07-13 19:55:45.742210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.161 #84 NEW cov: 12112 ft: 14058 corp: 20/853b lim: 100 exec/s: 84 rss: 70Mb L: 37/78 MS: 1 CopyPart- 00:07:58.161 [2024-07-13 19:55:45.782269] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:58.161 [2024-07-13 19:55:45.782294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.161 #85 NEW cov: 12112 ft: 14135 corp: 21/888b lim: 100 exec/s: 85 rss: 70Mb L: 35/78 MS: 1 ChangeBit- 00:07:58.421 [2024-07-13 19:55:45.832456] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:58.421 [2024-07-13 19:55:45.832480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.421 #86 NEW cov: 12112 ft: 14218 corp: 22/923b lim: 100 exec/s: 86 rss: 70Mb L: 35/78 MS: 1 ChangeByte- 00:07:58.421 [2024-07-13 19:55:45.872588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:58.421 [2024-07-13 19:55:45.872614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.421 #87 NEW cov: 12112 ft: 14224 corp: 23/958b lim: 100 exec/s: 87 rss: 70Mb L: 35/78 MS: 1 ChangeBinInt- 00:07:58.421 [2024-07-13 19:55:45.922941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:58.421 [2024-07-13 19:55:45.922966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.421 [2024-07-13 19:55:45.923014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:58.421 [2024-07-13 19:55:45.923029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.421 [2024-07-13 19:55:45.923080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:58.421 [2024-07-13 19:55:45.923096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.421 #88 NEW cov: 12112 ft: 14234 corp: 24/1024b lim: 100 exec/s: 88 rss: 70Mb L: 66/78 MS: 1 EraseBytes- 00:07:58.421 [2024-07-13 19:55:45.962928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:58.421 [2024-07-13 19:55:45.962952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.421 [2024-07-13 19:55:45.963005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:58.421 [2024-07-13 19:55:45.963020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.421 #89 NEW cov: 12112 ft: 14491 corp: 25/1067b lim: 100 exec/s: 89 rss: 70Mb L: 43/78 MS: 1 CMP- DE: "\0179;\264n\264)\000"- 00:07:58.421 [2024-07-13 19:55:46.003153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:58.421 [2024-07-13 19:55:46.003178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.421 [2024-07-13 19:55:46.003212] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:58.421 [2024-07-13 19:55:46.003227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.421 [2024-07-13 19:55:46.003276] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:58.421 [2024-07-13 19:55:46.003291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.421 #90 NEW cov: 12112 ft: 14522 corp: 26/1134b lim: 100 exec/s: 90 rss: 70Mb L: 67/78 MS: 1 ShuffleBytes- 00:07:58.421 [2024-07-13 19:55:46.053107] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:58.421 [2024-07-13 19:55:46.053132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.680 #91 NEW cov: 12112 ft: 14614 corp: 27/1166b lim: 100 exec/s: 91 rss: 70Mb L: 32/78 MS: 1 EraseBytes- 00:07:58.680 [2024-07-13 19:55:46.103242] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:58.680 [2024-07-13 19:55:46.103268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.680 #92 NEW cov: 12112 ft: 14618 corp: 28/1195b lim: 100 exec/s: 92 rss: 70Mb L: 29/78 MS: 1 ChangeByte- 00:07:58.680 [2024-07-13 19:55:46.143369] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:58.680 [2024-07-13 19:55:46.143395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.680 #93 NEW cov: 12112 ft: 14722 corp: 29/1224b lim: 100 exec/s: 93 rss: 70Mb L: 29/78 MS: 1 ChangeByte- 00:07:58.680 [2024-07-13 19:55:46.193785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:58.680 [2024-07-13 19:55:46.193812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.680 [2024-07-13 19:55:46.193846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:58.680 [2024-07-13 19:55:46.193859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.680 [2024-07-13 19:55:46.193910] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:58.681 [2024-07-13 19:55:46.193924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.681 #94 NEW cov: 12112 ft: 14736 corp: 30/1295b lim: 100 exec/s: 94 rss: 70Mb L: 71/78 MS: 1 InsertRepeatedBytes- 00:07:58.681 [2024-07-13 19:55:46.243623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:58.681 [2024-07-13 19:55:46.243649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.681 #95 NEW cov: 12112 ft: 14780 corp: 31/1330b lim: 100 exec/s: 95 rss: 70Mb L: 35/78 MS: 1 ChangeBit- 00:07:58.681 [2024-07-13 19:55:46.283964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:58.681 [2024-07-13 19:55:46.283989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.681 [2024-07-13 19:55:46.284036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:58.681 [2024-07-13 19:55:46.284051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.681 [2024-07-13 19:55:46.284104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:58.681 [2024-07-13 19:55:46.284119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.681 #96 NEW cov: 12112 ft: 14800 corp: 32/1408b lim: 100 exec/s: 96 rss: 70Mb L: 78/78 MS: 1 InsertRepeatedBytes- 00:07:58.681 [2024-07-13 19:55:46.333864] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:58.681 [2024-07-13 19:55:46.333890] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.940 #97 NEW cov: 12112 ft: 14823 corp: 33/1443b lim: 100 exec/s: 97 rss: 70Mb L: 35/78 MS: 1 ChangeBit- 00:07:58.940 [2024-07-13 19:55:46.374232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:58.940 [2024-07-13 19:55:46.374257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.940 [2024-07-13 19:55:46.374291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:58.940 [2024-07-13 19:55:46.374305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.940 [2024-07-13 19:55:46.374355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:58.940 [2024-07-13 19:55:46.374371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.940 #98 NEW cov: 12112 ft: 14831 corp: 34/1510b lim: 100 exec/s: 98 rss: 70Mb L: 67/78 MS: 1 CMP- DE: "?\000\000\000\000\000\000\000"- 00:07:58.940 [2024-07-13 19:55:46.424279] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:58.940 [2024-07-13 19:55:46.424304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.940 [2024-07-13 19:55:46.424341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:58.940 [2024-07-13 19:55:46.424355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.940 #99 NEW cov: 12112 ft: 14850 corp: 35/1553b lim: 100 exec/s: 99 rss: 70Mb L: 43/78 MS: 1 PersAutoDict- DE: "\0179;\264n\264)\000"- 00:07:58.940 [2024-07-13 19:55:46.474540] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:58.940 [2024-07-13 19:55:46.474566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.940 [2024-07-13 19:55:46.474612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:58.940 [2024-07-13 19:55:46.474626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.940 [2024-07-13 19:55:46.474682] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:58.940 [2024-07-13 19:55:46.474694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.940 #100 NEW cov: 12112 ft: 14869 corp: 36/1617b lim: 100 exec/s: 100 rss: 71Mb L: 64/78 MS: 1 CMP- DE: "\001\000\000\010"- 00:07:58.940 [2024-07-13 19:55:46.524564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:58.940 [2024-07-13 19:55:46.524589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.940 [2024-07-13 19:55:46.524625] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:58.940 [2024-07-13 19:55:46.524638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.940 #101 NEW cov: 12112 ft: 14874 corp: 37/1661b lim: 100 exec/s: 101 rss: 71Mb L: 44/78 MS: 1 InsertByte- 00:07:58.940 [2024-07-13 19:55:46.574566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:58.940 [2024-07-13 19:55:46.574592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.200 #102 NEW cov: 12112 ft: 14891 corp: 38/1690b lim: 100 exec/s: 102 rss: 71Mb L: 29/78 MS: 1 ShuffleBytes- 00:07:59.200 [2024-07-13 19:55:46.624805] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:59.200 [2024-07-13 19:55:46.624831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.200 [2024-07-13 19:55:46.624882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:59.200 [2024-07-13 19:55:46.624898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.200 #103 NEW cov: 12112 ft: 14900 corp: 39/1734b lim: 100 exec/s: 51 rss: 71Mb L: 44/78 MS: 1 ShuffleBytes- 00:07:59.200 #103 DONE cov: 12112 ft: 14900 corp: 39/1734b lim: 100 exec/s: 51 rss: 71Mb 00:07:59.200 ###### Recommended dictionary. ###### 00:07:59.200 "\000\000\000\000\000\000\000\000" # Uses: 1 00:07:59.200 "\0179;\264n\264)\000" # Uses: 1 00:07:59.200 "?\000\000\000\000\000\000\000" # Uses: 0 00:07:59.200 "\001\000\000\010" # Uses: 0 00:07:59.200 ###### End of recommended dictionary. 
###### 00:07:59.200 Done 103 runs in 2 second(s) 00:07:59.200 19:55:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:07:59.200 19:55:46 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:59.200 19:55:46 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:59.200 19:55:46 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:07:59.200 19:55:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:07:59.200 19:55:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:59.200 19:55:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:59.200 19:55:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:59.200 19:55:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:07:59.200 19:55:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:59.200 19:55:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:59.200 19:55:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:07:59.200 19:55:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4419 00:07:59.200 19:55:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:59.200 19:55:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:07:59.200 19:55:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:59.200 19:55:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:59.200 19:55:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:59.200 19:55:46 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:07:59.200 [2024-07-13 19:55:46.814972] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:59.200 [2024-07-13 19:55:46.815048] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3682164 ] 00:07:59.200 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.460 [2024-07-13 19:55:47.067384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.460 [2024-07-13 19:55:47.098367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.719 [2024-07-13 19:55:47.151419] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.719 [2024-07-13 19:55:47.167716] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:07:59.719 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:59.719 INFO: Seed: 2943314485 00:07:59.719 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:07:59.719 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:07:59.719 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:59.719 INFO: A corpus is not provided, starting from an empty corpus 00:07:59.719 #2 INITED exec/s: 0 rss: 62Mb 00:07:59.719 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:59.719 This may also happen if the target rejected all inputs we tried so far 00:07:59.719 [2024-07-13 19:55:47.232966] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16927600440217627370 len:60139 00:07:59.719 [2024-07-13 19:55:47.232996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.719 [2024-07-13 19:55:47.233066] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16927600444109941482 len:60139 00:07:59.719 [2024-07-13 19:55:47.233083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.978 NEW_FUNC[1/691]: 0x4b40a0 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:07:59.978 NEW_FUNC[2/691]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:59.978 #5 NEW cov: 11846 ft: 11847 corp: 2/25b lim: 50 exec/s: 0 rss: 69Mb L: 24/24 MS: 3 ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:07:59.978 [2024-07-13 19:55:47.563904] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16927600440217627370 len:60139 00:07:59.978 [2024-07-13 19:55:47.563970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.978 [2024-07-13 19:55:47.564056] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16927600444109941289 len:60139 00:07:59.978 [2024-07-13 19:55:47.564088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.978 #6 NEW cov: 11976 ft: 12560 corp: 3/49b lim: 50 exec/s: 0 rss: 69Mb L: 24/24 MS: 1 ChangeByte- 00:07:59.978 [2024-07-13 19:55:47.623860] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:169934848 len:1 00:07:59.978 [2024-07-13 19:55:47.623893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.978 [2024-07-13 19:55:47.623946] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:59.978 [2024-07-13 19:55:47.623962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.237 #8 NEW cov: 11982 ft: 12846 corp: 4/71b lim: 50 exec/s: 0 rss: 69Mb L: 22/24 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:00.237 [2024-07-13 19:55:47.663975] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 
lba:16927600448807561962 len:60139 00:08:00.237 [2024-07-13 19:55:47.664003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.237 [2024-07-13 19:55:47.664047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16927600444109941482 len:60139 00:08:00.237 [2024-07-13 19:55:47.664062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.237 #9 NEW cov: 12067 ft: 13146 corp: 5/95b lim: 50 exec/s: 0 rss: 69Mb L: 24/24 MS: 1 ChangeBinInt- 00:08:00.237 [2024-07-13 19:55:47.704122] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16927600440217627370 len:60139 00:08:00.237 [2024-07-13 19:55:47.704150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.237 [2024-07-13 19:55:47.704203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16927600444109941289 len:60139 00:08:00.237 [2024-07-13 19:55:47.704220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.237 #10 NEW cov: 12067 ft: 13226 corp: 6/119b lim: 50 exec/s: 0 rss: 69Mb L: 24/24 MS: 1 ShuffleBytes- 00:08:00.237 [2024-07-13 19:55:47.754241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1591201223332784874 len:5398 00:08:00.237 [2024-07-13 19:55:47.754269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.237 [2024-07-13 19:55:47.754326] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16927600440522435306 len:60139 00:08:00.237 [2024-07-13 19:55:47.754343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.237 #16 NEW cov: 12067 ft: 13305 corp: 7/143b lim: 50 exec/s: 0 rss: 69Mb L: 24/24 MS: 1 ChangeBinInt- 00:08:00.237 [2024-07-13 19:55:47.794385] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16927600440217627370 len:60139 00:08:00.237 [2024-07-13 19:55:47.794413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.237 [2024-07-13 19:55:47.794486] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16927600444109941289 len:60139 00:08:00.237 [2024-07-13 19:55:47.794502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.237 #17 NEW cov: 12067 ft: 13381 corp: 8/167b lim: 50 exec/s: 0 rss: 69Mb L: 24/24 MS: 1 CrossOver- 00:08:00.237 [2024-07-13 19:55:47.834396] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:00.237 [2024-07-13 19:55:47.834424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.237 #18 NEW cov: 12067 ft: 13717 corp: 9/183b lim: 50 exec/s: 0 rss: 69Mb L: 16/24 MS: 1 
InsertRepeatedBytes- 00:08:00.237 [2024-07-13 19:55:47.874836] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1591201223332784874 len:5398 00:08:00.237 [2024-07-13 19:55:47.874866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.237 [2024-07-13 19:55:47.874901] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16927600440522435306 len:5654 00:08:00.237 [2024-07-13 19:55:47.874916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.237 [2024-07-13 19:55:47.874967] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1519378740404360469 len:60139 00:08:00.237 [2024-07-13 19:55:47.874983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.237 [2024-07-13 19:55:47.875034] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:16927600444109941482 len:60139 00:08:00.237 [2024-07-13 19:55:47.875048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.497 #19 NEW cov: 12067 ft: 14058 corp: 10/229b lim: 50 exec/s: 0 rss: 69Mb L: 46/46 MS: 1 CopyPart- 00:08:00.497 [2024-07-13 19:55:47.924633] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:00.497 [2024-07-13 19:55:47.924660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.497 #20 NEW cov: 12067 ft: 14118 corp: 11/244b lim: 50 exec/s: 0 rss: 70Mb L: 15/46 MS: 1 EraseBytes- 00:08:00.497 [2024-07-13 19:55:47.974784] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:00.497 [2024-07-13 19:55:47.974812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.497 #21 NEW cov: 12067 ft: 14140 corp: 12/260b lim: 50 exec/s: 0 rss: 70Mb L: 16/46 MS: 1 CopyPart- 00:08:00.497 [2024-07-13 19:55:48.014993] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13003375013217238824 len:23467 00:08:00.497 [2024-07-13 19:55:48.015021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.497 [2024-07-13 19:55:48.015082] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:00.497 [2024-07-13 19:55:48.015098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.497 #22 NEW cov: 12067 ft: 14163 corp: 13/284b lim: 50 exec/s: 0 rss: 70Mb L: 24/46 MS: 1 CMP- DE: "\377(\264uD\256[\252"- 00:08:00.497 [2024-07-13 19:55:48.055144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16927600440217627370 len:60139 00:08:00.497 [2024-07-13 19:55:48.055172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:1 00:08:00.497 [2024-07-13 19:55:48.055224] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16927600444109941289 len:60139 00:08:00.497 [2024-07-13 19:55:48.055238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.497 #23 NEW cov: 12067 ft: 14187 corp: 14/308b lim: 50 exec/s: 0 rss: 70Mb L: 24/46 MS: 1 ShuffleBytes- 00:08:00.497 [2024-07-13 19:55:48.105475] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1591201223332784874 len:5398 00:08:00.497 [2024-07-13 19:55:48.105502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.497 [2024-07-13 19:55:48.105548] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16927600440522435306 len:5654 00:08:00.497 [2024-07-13 19:55:48.105567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.497 [2024-07-13 19:55:48.105635] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1519378740404360469 len:60139 00:08:00.497 [2024-07-13 19:55:48.105651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.497 [2024-07-13 19:55:48.105704] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:16927600444095261418 len:60139 00:08:00.497 [2024-07-13 19:55:48.105719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.497 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:00.497 #24 NEW cov: 12090 ft: 14214 corp: 15/355b lim: 50 exec/s: 0 rss: 70Mb L: 47/47 MS: 1 CrossOver- 00:08:00.497 [2024-07-13 19:55:48.155673] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:169934848 len:1 00:08:00.497 [2024-07-13 19:55:48.155700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.497 [2024-07-13 19:55:48.155744] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16927600440168868586 len:60651 00:08:00.497 [2024-07-13 19:55:48.155760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.497 [2024-07-13 19:55:48.155815] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16927600444109941482 len:60139 00:08:00.497 [2024-07-13 19:55:48.155828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.497 [2024-07-13 19:55:48.155881] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:3941264106 len:1 00:08:00.497 [2024-07-13 19:55:48.155898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.756 #25 NEW cov: 12090 ft: 14303 corp: 16/400b lim: 50 
exec/s: 0 rss: 70Mb L: 45/47 MS: 1 CrossOver- 00:08:00.756 [2024-07-13 19:55:48.205584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16927600448807561962 len:60139 00:08:00.756 [2024-07-13 19:55:48.205612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.756 [2024-07-13 19:55:48.205672] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16927600444109941482 len:60139 00:08:00.756 [2024-07-13 19:55:48.205688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.756 #26 NEW cov: 12090 ft: 14310 corp: 17/424b lim: 50 exec/s: 26 rss: 70Mb L: 24/47 MS: 1 ChangeBit- 00:08:00.756 [2024-07-13 19:55:48.255644] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2904974145093632 len:2065 00:08:00.756 [2024-07-13 19:55:48.255671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.756 #32 NEW cov: 12090 ft: 14374 corp: 18/439b lim: 50 exec/s: 32 rss: 70Mb L: 15/47 MS: 1 CMP- DE: "\012R\017\010\020\177\000\000"- 00:08:00.756 [2024-07-13 19:55:48.305876] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:169934848 len:1 00:08:00.756 [2024-07-13 19:55:48.305903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.756 [2024-07-13 19:55:48.305968] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:00.756 [2024-07-13 19:55:48.305986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.756 #33 NEW cov: 12090 ft: 14408 corp: 19/461b lim: 50 exec/s: 33 rss: 70Mb L: 22/47 MS: 1 ShuffleBytes- 00:08:00.756 [2024-07-13 19:55:48.345866] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16927600440217627370 len:59905 00:08:00.756 [2024-07-13 19:55:48.345893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.756 #34 NEW cov: 12090 ft: 14417 corp: 20/473b lim: 50 exec/s: 34 rss: 70Mb L: 12/47 MS: 1 EraseBytes- 00:08:00.756 [2024-07-13 19:55:48.396192] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16927600448807561962 len:60139 00:08:00.756 [2024-07-13 19:55:48.396220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.756 [2024-07-13 19:55:48.396269] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16927600444109941482 len:60139 00:08:00.756 [2024-07-13 19:55:48.396286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.015 #35 NEW cov: 12090 ft: 14437 corp: 21/497b lim: 50 exec/s: 35 rss: 70Mb L: 24/47 MS: 1 ShuffleBytes- 00:08:01.015 [2024-07-13 19:55:48.446299] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 
lba:16927600440202238186 len:60139 00:08:01.015 [2024-07-13 19:55:48.446326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.015 [2024-07-13 19:55:48.446392] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16927600444109941289 len:60139 00:08:01.015 [2024-07-13 19:55:48.446409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.015 #36 NEW cov: 12090 ft: 14504 corp: 22/521b lim: 50 exec/s: 36 rss: 70Mb L: 24/47 MS: 1 ChangeBinInt- 00:08:01.015 [2024-07-13 19:55:48.486400] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2904974145093632 len:2065 00:08:01.015 [2024-07-13 19:55:48.486425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.015 [2024-07-13 19:55:48.486489] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2130706432 len:11 00:08:01.015 [2024-07-13 19:55:48.486506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.015 #37 NEW cov: 12090 ft: 14549 corp: 23/541b lim: 50 exec/s: 37 rss: 70Mb L: 20/47 MS: 1 InsertRepeatedBytes- 00:08:01.015 [2024-07-13 19:55:48.536482] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16927600440217627370 len:60139 00:08:01.015 [2024-07-13 19:55:48.536509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.015 #38 NEW cov: 12090 ft: 14565 corp: 24/558b lim: 50 exec/s: 38 rss: 70Mb L: 17/47 MS: 1 EraseBytes- 00:08:01.015 [2024-07-13 19:55:48.576662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16927600440217627370 len:60139 00:08:01.015 [2024-07-13 19:55:48.576688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.015 [2024-07-13 19:55:48.576740] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16927600444109941289 len:32491 00:08:01.015 [2024-07-13 19:55:48.576756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.015 #39 NEW cov: 12090 ft: 14586 corp: 25/583b lim: 50 exec/s: 39 rss: 70Mb L: 25/47 MS: 1 InsertByte- 00:08:01.015 [2024-07-13 19:55:48.616757] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16927600448807561962 len:60139 00:08:01.015 [2024-07-13 19:55:48.616783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.015 [2024-07-13 19:55:48.616835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16927600444109941482 len:9451 00:08:01.015 [2024-07-13 19:55:48.616852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.015 #45 NEW cov: 12090 ft: 14601 corp: 26/608b lim: 50 exec/s: 45 rss: 70Mb L: 25/47 
MS: 1 InsertByte- 00:08:01.015 [2024-07-13 19:55:48.656896] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:169934848 len:1 00:08:01.015 [2024-07-13 19:55:48.656923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.015 [2024-07-13 19:55:48.656972] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:562949953421312 len:1 00:08:01.015 [2024-07-13 19:55:48.656987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.275 #46 NEW cov: 12090 ft: 14626 corp: 27/630b lim: 50 exec/s: 46 rss: 70Mb L: 22/47 MS: 1 ChangeBinInt- 00:08:01.275 [2024-07-13 19:55:48.706990] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16927600440217627370 len:60139 00:08:01.275 [2024-07-13 19:55:48.707015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.275 [2024-07-13 19:55:48.707066] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16927600444109941289 len:48363 00:08:01.275 [2024-07-13 19:55:48.707083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.275 #47 NEW cov: 12090 ft: 14645 corp: 28/654b lim: 50 exec/s: 47 rss: 70Mb L: 24/47 MS: 1 ChangeByte- 00:08:01.275 [2024-07-13 19:55:48.737120] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13003375013217238824 len:23467 00:08:01.275 [2024-07-13 19:55:48.737147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.275 [2024-07-13 19:55:48.737204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:08:01.275 [2024-07-13 19:55:48.737221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.275 #48 NEW cov: 12090 ft: 14695 corp: 29/678b lim: 50 exec/s: 48 rss: 70Mb L: 24/47 MS: 1 ShuffleBytes- 00:08:01.275 [2024-07-13 19:55:48.787137] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:26 len:1 00:08:01.275 [2024-07-13 19:55:48.787164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.275 #49 NEW cov: 12090 ft: 14707 corp: 30/695b lim: 50 exec/s: 49 rss: 70Mb L: 17/47 MS: 1 InsertByte- 00:08:01.275 [2024-07-13 19:55:48.827377] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16927600440217627370 len:60139 00:08:01.275 [2024-07-13 19:55:48.827405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.275 [2024-07-13 19:55:48.827472] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16927600444109941289 len:60139 00:08:01.275 [2024-07-13 19:55:48.827490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 
dnr:1 00:08:01.275 #50 NEW cov: 12090 ft: 14711 corp: 31/719b lim: 50 exec/s: 50 rss: 70Mb L: 24/47 MS: 1 ShuffleBytes- 00:08:01.275 [2024-07-13 19:55:48.857335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:01.275 [2024-07-13 19:55:48.857361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.275 #51 NEW cov: 12090 ft: 14725 corp: 32/736b lim: 50 exec/s: 51 rss: 70Mb L: 17/47 MS: 1 InsertByte- 00:08:01.275 [2024-07-13 19:55:48.907495] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:08:01.275 [2024-07-13 19:55:48.907522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.275 #52 NEW cov: 12090 ft: 14813 corp: 33/753b lim: 50 exec/s: 52 rss: 71Mb L: 17/47 MS: 1 InsertByte- 00:08:01.534 [2024-07-13 19:55:48.947708] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:576550912256903695 len:4097 00:08:01.534 [2024-07-13 19:55:48.947736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.534 [2024-07-13 19:55:48.947794] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:2130706432 len:11 00:08:01.534 [2024-07-13 19:55:48.947809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.534 #53 NEW cov: 12090 ft: 14825 corp: 34/773b lim: 50 exec/s: 53 rss: 71Mb L: 20/47 MS: 1 ShuffleBytes- 00:08:01.534 [2024-07-13 19:55:48.997854] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:2904974145093632 len:2065 00:08:01.534 [2024-07-13 19:55:48.997881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.534 [2024-07-13 19:55:48.997932] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:67072340000768 len:11 00:08:01.534 [2024-07-13 19:55:48.997948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.534 #54 NEW cov: 12090 ft: 14836 corp: 35/793b lim: 50 exec/s: 54 rss: 71Mb L: 20/47 MS: 1 ChangeByte- 00:08:01.534 [2024-07-13 19:55:49.037840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16927600440217627370 len:60139 00:08:01.535 [2024-07-13 19:55:49.037866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.535 #55 NEW cov: 12090 ft: 14871 corp: 36/810b lim: 50 exec/s: 55 rss: 71Mb L: 17/47 MS: 1 ChangeByte- 00:08:01.535 [2024-07-13 19:55:49.088333] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16927600440217627370 len:60139 00:08:01.535 [2024-07-13 19:55:49.088360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.535 [2024-07-13 19:55:49.088422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 
lba:16933534598559361577 len:65536 00:08:01.535 [2024-07-13 19:55:49.088437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.535 [2024-07-13 19:55:49.088495] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:08:01.535 [2024-07-13 19:55:49.088510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.535 [2024-07-13 19:55:49.088564] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65515 00:08:01.535 [2024-07-13 19:55:49.088579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:01.535 #61 NEW cov: 12090 ft: 14899 corp: 37/858b lim: 50 exec/s: 61 rss: 71Mb L: 48/48 MS: 1 InsertRepeatedBytes- 00:08:01.535 [2024-07-13 19:55:49.138475] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:169934848 len:1 00:08:01.535 [2024-07-13 19:55:49.138502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.535 [2024-07-13 19:55:49.138550] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16925630115331893994 len:60651 00:08:01.535 [2024-07-13 19:55:49.138568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.535 [2024-07-13 19:55:49.138621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16927600444109941482 len:60139 00:08:01.535 [2024-07-13 19:55:49.138638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.535 [2024-07-13 19:55:49.138690] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:3941264106 len:1 00:08:01.535 [2024-07-13 19:55:49.138707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:01.535 #62 NEW cov: 12090 ft: 14911 corp: 38/903b lim: 50 exec/s: 62 rss: 71Mb L: 45/48 MS: 1 ChangeBinInt- 00:08:01.535 [2024-07-13 19:55:49.188584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:169934848 len:1 00:08:01.535 [2024-07-13 19:55:49.188611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.535 [2024-07-13 19:55:49.188673] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16925630115331893994 len:60417 00:08:01.535 [2024-07-13 19:55:49.188691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.535 [2024-07-13 19:55:49.188743] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:1966611691718311938 len:60139 00:08:01.535 [2024-07-13 19:55:49.188757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 
dnr:1 00:08:01.535 [2024-07-13 19:55:49.188810] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:3941264106 len:1 00:08:01.535 [2024-07-13 19:55:49.188827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:01.794 #63 NEW cov: 12090 ft: 14913 corp: 39/948b lim: 50 exec/s: 31 rss: 71Mb L: 45/48 MS: 1 CMP- DE: "\000\000\000\000\002\033J\316"- 00:08:01.794 #63 DONE cov: 12090 ft: 14913 corp: 39/948b lim: 50 exec/s: 31 rss: 71Mb 00:08:01.794 ###### Recommended dictionary. ###### 00:08:01.794 "\377(\264uD\256[\252" # Uses: 0 00:08:01.794 "\012R\017\010\020\177\000\000" # Uses: 0 00:08:01.794 "\000\000\000\000\002\033J\316" # Uses: 0 00:08:01.794 ###### End of recommended dictionary. ###### 00:08:01.794 Done 63 runs in 2 second(s) 00:08:01.794 19:55:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:08:01.794 19:55:49 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:01.794 19:55:49 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:01.794 19:55:49 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:08:01.794 19:55:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:08:01.794 19:55:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:01.794 19:55:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:01.794 19:55:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:01.794 19:55:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:08:01.794 19:55:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:01.794 19:55:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:01.794 19:55:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:08:01.794 19:55:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4420 00:08:01.794 19:55:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:01.794 19:55:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:08:01.794 19:55:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:01.794 19:55:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:01.794 19:55:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:01.794 19:55:49 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:08:01.794 [2024-07-13 19:55:49.380631] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:01.794 [2024-07-13 19:55:49.380720] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3682516 ] 00:08:01.794 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.053 [2024-07-13 19:55:49.640079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.053 [2024-07-13 19:55:49.669944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.311 [2024-07-13 19:55:49.722272] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.311 [2024-07-13 19:55:49.738590] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:02.311 INFO: Running with entropic power schedule (0xFF, 100). 00:08:02.311 INFO: Seed: 1216355223 00:08:02.311 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:08:02.311 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:08:02.311 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:02.311 INFO: A corpus is not provided, starting from an empty corpus 00:08:02.311 #2 INITED exec/s: 0 rss: 62Mb 00:08:02.311 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:02.311 This may also happen if the target rejected all inputs we tried so far 00:08:02.311 [2024-07-13 19:55:49.786679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.311 [2024-07-13 19:55:49.786711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.570 NEW_FUNC[1/693]: 0x4b5c60 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:08:02.570 NEW_FUNC[2/693]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:02.570 #8 NEW cov: 11904 ft: 11904 corp: 2/29b lim: 90 exec/s: 0 rss: 69Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:08:02.570 [2024-07-13 19:55:50.117586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.570 [2024-07-13 19:55:50.117631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.570 #9 NEW cov: 12034 ft: 12494 corp: 3/57b lim: 90 exec/s: 0 rss: 69Mb L: 28/28 MS: 1 ChangeBinInt- 00:08:02.570 [2024-07-13 19:55:50.167626] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.570 [2024-07-13 19:55:50.167653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.570 #10 NEW cov: 12040 ft: 12911 corp: 4/85b lim: 90 exec/s: 0 rss: 69Mb L: 28/28 MS: 1 ShuffleBytes- 00:08:02.570 [2024-07-13 19:55:50.207745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.570 [2024-07-13 19:55:50.207774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.828 #11 NEW cov: 12125 ft: 13155 corp: 5/117b lim: 90 exec/s: 0 rss: 69Mb L: 32/32 
MS: 1 InsertRepeatedBytes- 00:08:02.828 [2024-07-13 19:55:50.257886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.828 [2024-07-13 19:55:50.257914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.828 #12 NEW cov: 12125 ft: 13206 corp: 6/145b lim: 90 exec/s: 0 rss: 69Mb L: 28/32 MS: 1 CopyPart- 00:08:02.828 [2024-07-13 19:55:50.298035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.828 [2024-07-13 19:55:50.298061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.828 #18 NEW cov: 12125 ft: 13273 corp: 7/170b lim: 90 exec/s: 0 rss: 69Mb L: 25/32 MS: 1 EraseBytes- 00:08:02.828 [2024-07-13 19:55:50.338117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.828 [2024-07-13 19:55:50.338143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.828 #19 NEW cov: 12125 ft: 13357 corp: 8/198b lim: 90 exec/s: 0 rss: 70Mb L: 28/32 MS: 1 CMP- DE: "\000\000\006\265\374\246\257+"- 00:08:02.828 [2024-07-13 19:55:50.388265] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.828 [2024-07-13 19:55:50.388291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.828 #20 NEW cov: 12125 ft: 13377 corp: 9/226b lim: 90 exec/s: 0 rss: 70Mb L: 28/32 MS: 1 ChangeByte- 00:08:02.828 [2024-07-13 19:55:50.438405] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.829 [2024-07-13 19:55:50.438431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.829 #21 NEW cov: 12125 ft: 13420 corp: 10/258b lim: 90 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 CopyPart- 00:08:02.829 [2024-07-13 19:55:50.488539] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.829 [2024-07-13 19:55:50.488566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.088 #22 NEW cov: 12125 ft: 13521 corp: 11/290b lim: 90 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 ChangeByte- 00:08:03.088 [2024-07-13 19:55:50.528626] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:03.088 [2024-07-13 19:55:50.528651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.088 #23 NEW cov: 12125 ft: 13542 corp: 12/318b lim: 90 exec/s: 0 rss: 70Mb L: 28/32 MS: 1 CrossOver- 00:08:03.088 [2024-07-13 19:55:50.579260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:03.088 [2024-07-13 19:55:50.579287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.088 [2024-07-13 19:55:50.579327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:03.088 
[2024-07-13 19:55:50.579341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.088 [2024-07-13 19:55:50.579395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:03.088 [2024-07-13 19:55:50.579408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.088 [2024-07-13 19:55:50.579482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:03.088 [2024-07-13 19:55:50.579497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.088 #24 NEW cov: 12125 ft: 14496 corp: 13/398b lim: 90 exec/s: 0 rss: 70Mb L: 80/80 MS: 1 InsertRepeatedBytes- 00:08:03.088 [2024-07-13 19:55:50.618928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:03.088 [2024-07-13 19:55:50.618954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.088 #29 NEW cov: 12125 ft: 14527 corp: 14/416b lim: 90 exec/s: 0 rss: 70Mb L: 18/80 MS: 5 PersAutoDict-CMP-ChangeBinInt-EraseBytes-CopyPart- DE: "\000\000\006\265\374\246\257+"-"\347\003\000\000\000\000\000\000"- 00:08:03.088 [2024-07-13 19:55:50.658985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:03.088 [2024-07-13 19:55:50.659013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.088 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:03.088 #30 NEW cov: 12148 ft: 14595 corp: 15/445b lim: 90 exec/s: 0 rss: 70Mb L: 29/80 MS: 1 InsertByte- 00:08:03.088 [2024-07-13 19:55:50.699636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:03.088 [2024-07-13 19:55:50.699664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.088 [2024-07-13 19:55:50.699706] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:03.088 [2024-07-13 19:55:50.699721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.088 [2024-07-13 19:55:50.699777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:03.088 [2024-07-13 19:55:50.699793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.088 [2024-07-13 19:55:50.699848] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:03.088 [2024-07-13 19:55:50.699866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.088 #31 NEW cov: 12148 ft: 14633 corp: 16/525b lim: 90 exec/s: 0 rss: 70Mb L: 80/80 MS: 1 ChangeBit- 00:08:03.346 [2024-07-13 19:55:50.749303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 
cid:0 nsid:0 00:08:03.346 [2024-07-13 19:55:50.749333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.346 #32 NEW cov: 12148 ft: 14694 corp: 17/557b lim: 90 exec/s: 32 rss: 70Mb L: 32/80 MS: 1 CMP- DE: "\000\000\000\006"- 00:08:03.346 [2024-07-13 19:55:50.799462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:03.346 [2024-07-13 19:55:50.799491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.346 #33 NEW cov: 12148 ft: 14727 corp: 18/585b lim: 90 exec/s: 33 rss: 70Mb L: 28/80 MS: 1 ChangeByte- 00:08:03.346 [2024-07-13 19:55:50.839545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:03.346 [2024-07-13 19:55:50.839572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.346 #34 NEW cov: 12148 ft: 14732 corp: 19/617b lim: 90 exec/s: 34 rss: 70Mb L: 32/80 MS: 1 ChangeBinInt- 00:08:03.346 [2024-07-13 19:55:50.879650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:03.346 [2024-07-13 19:55:50.879679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.346 #35 NEW cov: 12148 ft: 14765 corp: 20/646b lim: 90 exec/s: 35 rss: 70Mb L: 29/80 MS: 1 EraseBytes- 00:08:03.346 [2024-07-13 19:55:50.929808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:03.346 [2024-07-13 19:55:50.929834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.346 #36 NEW cov: 12148 ft: 14788 corp: 21/674b lim: 90 exec/s: 36 rss: 70Mb L: 28/80 MS: 1 ChangeBit- 00:08:03.346 [2024-07-13 19:55:50.969933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:03.346 [2024-07-13 19:55:50.969961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.346 #37 NEW cov: 12148 ft: 14796 corp: 22/694b lim: 90 exec/s: 37 rss: 70Mb L: 20/80 MS: 1 EraseBytes- 00:08:03.605 [2024-07-13 19:55:51.010337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:03.605 [2024-07-13 19:55:51.010364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.605 [2024-07-13 19:55:51.010408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:03.605 [2024-07-13 19:55:51.010424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.605 [2024-07-13 19:55:51.010478] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:03.605 [2024-07-13 19:55:51.010495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.605 #38 NEW cov: 12148 ft: 15125 corp: 23/751b lim: 90 exec/s: 38 rss: 70Mb L: 57/80 MS: 1 
InsertRepeatedBytes- 00:08:03.605 [2024-07-13 19:55:51.050161] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:03.605 [2024-07-13 19:55:51.050188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.605 #39 NEW cov: 12148 ft: 15134 corp: 24/769b lim: 90 exec/s: 39 rss: 70Mb L: 18/80 MS: 1 EraseBytes- 00:08:03.605 [2024-07-13 19:55:51.100334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:03.605 [2024-07-13 19:55:51.100362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.605 #40 NEW cov: 12148 ft: 15211 corp: 25/797b lim: 90 exec/s: 40 rss: 70Mb L: 28/80 MS: 1 ChangeByte- 00:08:03.605 [2024-07-13 19:55:51.150454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:03.605 [2024-07-13 19:55:51.150480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.605 #41 NEW cov: 12148 ft: 15259 corp: 26/825b lim: 90 exec/s: 41 rss: 70Mb L: 28/80 MS: 1 ChangeBinInt- 00:08:03.605 [2024-07-13 19:55:51.200609] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:03.605 [2024-07-13 19:55:51.200636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.605 #42 NEW cov: 12148 ft: 15267 corp: 27/857b lim: 90 exec/s: 42 rss: 70Mb L: 32/80 MS: 1 CopyPart- 00:08:03.605 [2024-07-13 19:55:51.240840] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:03.605 [2024-07-13 19:55:51.240866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.605 [2024-07-13 19:55:51.240905] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:03.605 [2024-07-13 19:55:51.240923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.605 #43 NEW cov: 12148 ft: 15554 corp: 28/893b lim: 90 exec/s: 43 rss: 70Mb L: 36/80 MS: 1 CopyPart- 00:08:03.863 [2024-07-13 19:55:51.281043] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:03.863 [2024-07-13 19:55:51.281069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.863 [2024-07-13 19:55:51.281123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:03.863 [2024-07-13 19:55:51.281139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.863 #44 NEW cov: 12148 ft: 15562 corp: 29/944b lim: 90 exec/s: 44 rss: 70Mb L: 51/80 MS: 1 CopyPart- 00:08:03.863 [2024-07-13 19:55:51.320935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:03.863 [2024-07-13 19:55:51.320962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:08:03.863 #50 NEW cov: 12148 ft: 15576 corp: 30/977b lim: 90 exec/s: 50 rss: 70Mb L: 33/80 MS: 1 InsertByte- 00:08:03.863 [2024-07-13 19:55:51.371087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:03.863 [2024-07-13 19:55:51.371113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.863 #51 NEW cov: 12148 ft: 15608 corp: 31/1005b lim: 90 exec/s: 51 rss: 70Mb L: 28/80 MS: 1 ShuffleBytes- 00:08:03.863 [2024-07-13 19:55:51.411179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:03.863 [2024-07-13 19:55:51.411205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.863 #52 NEW cov: 12148 ft: 15640 corp: 32/1033b lim: 90 exec/s: 52 rss: 70Mb L: 28/80 MS: 1 PersAutoDict- DE: "\000\000\000\006"- 00:08:03.863 [2024-07-13 19:55:51.451281] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:03.863 [2024-07-13 19:55:51.451307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.863 #53 NEW cov: 12148 ft: 15655 corp: 33/1053b lim: 90 exec/s: 53 rss: 70Mb L: 20/80 MS: 1 CMP- DE: "\001)\264r\010y\326\030"- 00:08:03.863 [2024-07-13 19:55:51.501454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:03.863 [2024-07-13 19:55:51.501481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.121 #54 NEW cov: 12148 ft: 15661 corp: 34/1071b lim: 90 exec/s: 54 rss: 70Mb L: 18/80 MS: 1 ChangeByte- 00:08:04.121 [2024-07-13 19:55:51.551602] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:04.121 [2024-07-13 19:55:51.551629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.121 #55 NEW cov: 12148 ft: 15678 corp: 35/1099b lim: 90 exec/s: 55 rss: 70Mb L: 28/80 MS: 1 CopyPart- 00:08:04.121 [2024-07-13 19:55:51.591989] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:04.121 [2024-07-13 19:55:51.592016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.121 [2024-07-13 19:55:51.592079] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:04.121 [2024-07-13 19:55:51.592096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.121 [2024-07-13 19:55:51.592153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:04.121 [2024-07-13 19:55:51.592170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.121 #56 NEW cov: 12148 ft: 15683 corp: 36/1166b lim: 90 exec/s: 56 rss: 70Mb L: 67/80 MS: 1 EraseBytes- 00:08:04.121 [2024-07-13 19:55:51.642264] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE 
(11) sqid:1 cid:0 nsid:0 00:08:04.121 [2024-07-13 19:55:51.642290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.121 [2024-07-13 19:55:51.642355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:04.121 [2024-07-13 19:55:51.642371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.121 [2024-07-13 19:55:51.642428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:04.121 [2024-07-13 19:55:51.642448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.121 [2024-07-13 19:55:51.642504] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:04.121 [2024-07-13 19:55:51.642519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.121 #57 NEW cov: 12148 ft: 15701 corp: 37/1246b lim: 90 exec/s: 57 rss: 70Mb L: 80/80 MS: 1 ChangeBit- 00:08:04.121 [2024-07-13 19:55:51.682220] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:04.121 [2024-07-13 19:55:51.682246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.121 [2024-07-13 19:55:51.682305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:04.121 [2024-07-13 19:55:51.682320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.121 [2024-07-13 19:55:51.682379] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:04.121 [2024-07-13 19:55:51.682393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.121 #58 NEW cov: 12148 ft: 15702 corp: 38/1302b lim: 90 exec/s: 58 rss: 71Mb L: 56/80 MS: 1 EraseBytes- 00:08:04.121 [2024-07-13 19:55:51.732094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:04.121 [2024-07-13 19:55:51.732121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.121 #59 NEW cov: 12148 ft: 15708 corp: 39/1331b lim: 90 exec/s: 59 rss: 71Mb L: 29/80 MS: 1 ChangeByte- 00:08:04.380 [2024-07-13 19:55:51.782209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:04.380 [2024-07-13 19:55:51.782236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.380 #60 NEW cov: 12148 ft: 15713 corp: 40/1351b lim: 90 exec/s: 30 rss: 71Mb L: 20/80 MS: 1 ChangeByte- 00:08:04.380 #60 DONE cov: 12148 ft: 15713 corp: 40/1351b lim: 90 exec/s: 30 rss: 71Mb 00:08:04.380 ###### Recommended dictionary. 
###### 00:08:04.380 "\000\000\006\265\374\246\257+" # Uses: 1 00:08:04.380 "\347\003\000\000\000\000\000\000" # Uses: 0 00:08:04.380 "\000\000\000\006" # Uses: 1 00:08:04.380 "\001)\264r\010y\326\030" # Uses: 0 00:08:04.380 ###### End of recommended dictionary. ###### 00:08:04.380 Done 60 runs in 2 second(s) 00:08:04.380 19:55:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:08:04.380 19:55:51 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:04.380 19:55:51 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:04.380 19:55:51 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:08:04.380 19:55:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:08:04.380 19:55:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:04.380 19:55:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:04.380 19:55:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:04.380 19:55:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:08:04.380 19:55:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:04.380 19:55:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:04.380 19:55:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:08:04.381 19:55:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4421 00:08:04.381 19:55:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:04.381 19:55:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:08:04.381 19:55:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:04.381 19:55:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:04.381 19:55:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:04.381 19:55:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:08:04.381 [2024-07-13 19:55:51.961652] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:04.381 [2024-07-13 19:55:51.961725] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3682985 ] 00:08:04.381 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.639 [2024-07-13 19:55:52.219198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.639 [2024-07-13 19:55:52.245725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.639 [2024-07-13 19:55:52.298087] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.898 [2024-07-13 19:55:52.314250] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:08:04.898 INFO: Running with entropic power schedule (0xFF, 100). 00:08:04.898 INFO: Seed: 3793355640 00:08:04.898 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:08:04.898 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:08:04.898 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:04.898 INFO: A corpus is not provided, starting from an empty corpus 00:08:04.898 #2 INITED exec/s: 0 rss: 62Mb 00:08:04.898 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:04.898 This may also happen if the target rejected all inputs we tried so far 00:08:04.898 [2024-07-13 19:55:52.382192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:04.898 [2024-07-13 19:55:52.382234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.898 [2024-07-13 19:55:52.382311] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:04.898 [2024-07-13 19:55:52.382329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.898 [2024-07-13 19:55:52.382403] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:04.898 [2024-07-13 19:55:52.382424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.898 [2024-07-13 19:55:52.382502] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:04.898 [2024-07-13 19:55:52.382520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.898 [2024-07-13 19:55:52.382589] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:04.898 [2024-07-13 19:55:52.382610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:05.157 NEW_FUNC[1/689]: 0x4b8e80 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:08:05.157 NEW_FUNC[2/689]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:05.157 #4 NEW cov: 11841 ft: 11877 corp: 2/51b lim: 50 exec/s: 0 rss: 69Mb L: 50/50 MS: 2 
InsertByte-InsertRepeatedBytes- 00:08:05.157 [2024-07-13 19:55:52.732076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.157 [2024-07-13 19:55:52.732123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.157 [2024-07-13 19:55:52.732272] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.157 [2024-07-13 19:55:52.732300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.157 [2024-07-13 19:55:52.732433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.157 [2024-07-13 19:55:52.732463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.157 [2024-07-13 19:55:52.732597] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:05.157 [2024-07-13 19:55:52.732625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.157 NEW_FUNC[1/4]: 0xfa4bd0 in rte_rdtsc /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/rte_cycles.h:25 00:08:05.157 NEW_FUNC[2/4]: 0x134c500 in nvmf_transport_poll_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:727 00:08:05.157 #6 NEW cov: 12009 ft: 12730 corp: 3/93b lim: 50 exec/s: 0 rss: 69Mb L: 42/50 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:05.157 [2024-07-13 19:55:52.791670] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.157 [2024-07-13 19:55:52.791705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.157 [2024-07-13 19:55:52.791838] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.157 [2024-07-13 19:55:52.791860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.416 #7 NEW cov: 12015 ft: 13379 corp: 4/116b lim: 50 exec/s: 0 rss: 69Mb L: 23/50 MS: 1 EraseBytes- 00:08:05.416 [2024-07-13 19:55:52.852723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.416 [2024-07-13 19:55:52.852757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.416 [2024-07-13 19:55:52.852859] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.416 [2024-07-13 19:55:52.852889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.416 [2024-07-13 19:55:52.853030] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.416 [2024-07-13 19:55:52.853057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.416 [2024-07-13 19:55:52.853187] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
RELEASE (15) sqid:1 cid:3 nsid:0 00:08:05.416 [2024-07-13 19:55:52.853214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.416 [2024-07-13 19:55:52.853351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:05.416 [2024-07-13 19:55:52.853378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:05.416 #8 NEW cov: 12100 ft: 13705 corp: 5/166b lim: 50 exec/s: 0 rss: 69Mb L: 50/50 MS: 1 ChangeBit- 00:08:05.416 [2024-07-13 19:55:52.912069] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.416 [2024-07-13 19:55:52.912097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.416 [2024-07-13 19:55:52.912238] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.416 [2024-07-13 19:55:52.912261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.416 #9 NEW cov: 12100 ft: 13798 corp: 6/190b lim: 50 exec/s: 0 rss: 69Mb L: 24/50 MS: 1 CrossOver- 00:08:05.416 [2024-07-13 19:55:52.972274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.416 [2024-07-13 19:55:52.972307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.416 [2024-07-13 19:55:52.972446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.416 [2024-07-13 19:55:52.972464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.416 #10 NEW cov: 12100 ft: 13847 corp: 7/215b lim: 50 exec/s: 0 rss: 70Mb L: 25/50 MS: 1 InsertByte- 00:08:05.416 [2024-07-13 19:55:53.033345] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.416 [2024-07-13 19:55:53.033379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.416 [2024-07-13 19:55:53.033480] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.416 [2024-07-13 19:55:53.033504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.416 [2024-07-13 19:55:53.033638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.416 [2024-07-13 19:55:53.033660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.416 [2024-07-13 19:55:53.033800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:05.416 [2024-07-13 19:55:53.033821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.416 [2024-07-13 19:55:53.033955] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) 
sqid:1 cid:4 nsid:0 00:08:05.416 [2024-07-13 19:55:53.033976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:05.416 #16 NEW cov: 12100 ft: 13962 corp: 8/265b lim: 50 exec/s: 0 rss: 70Mb L: 50/50 MS: 1 CopyPart- 00:08:05.674 [2024-07-13 19:55:53.093469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.674 [2024-07-13 19:55:53.093500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.674 [2024-07-13 19:55:53.093614] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.674 [2024-07-13 19:55:53.093637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.674 [2024-07-13 19:55:53.093770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.674 [2024-07-13 19:55:53.093799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.674 [2024-07-13 19:55:53.093932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:05.674 [2024-07-13 19:55:53.093958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.674 [2024-07-13 19:55:53.094096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:05.674 [2024-07-13 19:55:53.094125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:05.674 #17 NEW cov: 12100 ft: 14043 corp: 9/315b lim: 50 exec/s: 0 rss: 70Mb L: 50/50 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:08:05.674 [2024-07-13 19:55:53.143538] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.674 [2024-07-13 19:55:53.143571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.674 [2024-07-13 19:55:53.143670] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.674 [2024-07-13 19:55:53.143695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.674 [2024-07-13 19:55:53.143835] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.674 [2024-07-13 19:55:53.143859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.674 [2024-07-13 19:55:53.144003] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:05.674 [2024-07-13 19:55:53.144027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.674 [2024-07-13 19:55:53.144166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:05.674 [2024-07-13 19:55:53.144188] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:05.674 #18 NEW cov: 12100 ft: 14137 corp: 10/365b lim: 50 exec/s: 0 rss: 70Mb L: 50/50 MS: 1 ChangeByte- 00:08:05.674 [2024-07-13 19:55:53.203818] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.674 [2024-07-13 19:55:53.203854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.674 [2024-07-13 19:55:53.203952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.674 [2024-07-13 19:55:53.203982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.674 [2024-07-13 19:55:53.204125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.674 [2024-07-13 19:55:53.204152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.674 [2024-07-13 19:55:53.204296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:05.674 [2024-07-13 19:55:53.204324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.674 [2024-07-13 19:55:53.204472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:05.674 [2024-07-13 19:55:53.204497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:05.674 #19 NEW cov: 12100 ft: 14221 corp: 11/415b lim: 50 exec/s: 0 rss: 70Mb L: 50/50 MS: 1 CrossOver- 00:08:05.674 [2024-07-13 19:55:53.263197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.674 [2024-07-13 19:55:53.263232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.675 [2024-07-13 19:55:53.263374] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.675 [2024-07-13 19:55:53.263398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.675 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:05.675 #20 NEW cov: 12123 ft: 14281 corp: 12/443b lim: 50 exec/s: 0 rss: 70Mb L: 28/50 MS: 1 CrossOver- 00:08:05.675 [2024-07-13 19:55:53.324157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.675 [2024-07-13 19:55:53.324190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.675 [2024-07-13 19:55:53.324291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.675 [2024-07-13 19:55:53.324316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.675 [2024-07-13 19:55:53.324447] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.675 [2024-07-13 19:55:53.324482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.675 [2024-07-13 19:55:53.324607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:05.675 [2024-07-13 19:55:53.324629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.675 [2024-07-13 19:55:53.324762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:05.675 [2024-07-13 19:55:53.324783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:05.933 #21 NEW cov: 12123 ft: 14303 corp: 13/493b lim: 50 exec/s: 0 rss: 70Mb L: 50/50 MS: 1 CopyPart- 00:08:05.933 [2024-07-13 19:55:53.374382] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.933 [2024-07-13 19:55:53.374416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.933 [2024-07-13 19:55:53.374523] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.933 [2024-07-13 19:55:53.374547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.933 [2024-07-13 19:55:53.374684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.933 [2024-07-13 19:55:53.374709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.933 [2024-07-13 19:55:53.374836] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:05.933 [2024-07-13 19:55:53.374858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.933 [2024-07-13 19:55:53.374993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:05.933 [2024-07-13 19:55:53.375019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:05.933 #22 NEW cov: 12123 ft: 14340 corp: 14/543b lim: 50 exec/s: 22 rss: 70Mb L: 50/50 MS: 1 CopyPart- 00:08:05.933 [2024-07-13 19:55:53.424469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.933 [2024-07-13 19:55:53.424502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.933 [2024-07-13 19:55:53.424593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.933 [2024-07-13 19:55:53.424621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.933 [2024-07-13 19:55:53.424759] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.933 [2024-07-13 
19:55:53.424784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.933 [2024-07-13 19:55:53.424928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:05.933 [2024-07-13 19:55:53.424951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.933 [2024-07-13 19:55:53.425090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:05.933 [2024-07-13 19:55:53.425118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:05.933 #23 NEW cov: 12123 ft: 14360 corp: 15/593b lim: 50 exec/s: 23 rss: 70Mb L: 50/50 MS: 1 ChangeBinInt- 00:08:05.933 [2024-07-13 19:55:53.474607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.933 [2024-07-13 19:55:53.474640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.933 [2024-07-13 19:55:53.474750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.933 [2024-07-13 19:55:53.474779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.933 [2024-07-13 19:55:53.474916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.933 [2024-07-13 19:55:53.474945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.933 [2024-07-13 19:55:53.475093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:05.933 [2024-07-13 19:55:53.475113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.933 [2024-07-13 19:55:53.475258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:05.933 [2024-07-13 19:55:53.475286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:05.933 #24 NEW cov: 12123 ft: 14384 corp: 16/643b lim: 50 exec/s: 24 rss: 70Mb L: 50/50 MS: 1 ChangeBinInt- 00:08:05.933 [2024-07-13 19:55:53.524823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.933 [2024-07-13 19:55:53.524857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.933 [2024-07-13 19:55:53.524956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.933 [2024-07-13 19:55:53.524981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.933 [2024-07-13 19:55:53.525114] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.933 [2024-07-13 19:55:53.525139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.933 [2024-07-13 19:55:53.525278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:05.933 [2024-07-13 19:55:53.525307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.933 [2024-07-13 19:55:53.525453] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:05.933 [2024-07-13 19:55:53.525481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:05.933 #25 NEW cov: 12123 ft: 14405 corp: 17/693b lim: 50 exec/s: 25 rss: 70Mb L: 50/50 MS: 1 ChangeBit- 00:08:05.933 [2024-07-13 19:55:53.584234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.933 [2024-07-13 19:55:53.584263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.933 [2024-07-13 19:55:53.584404] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.933 [2024-07-13 19:55:53.584428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.191 #26 NEW cov: 12123 ft: 14415 corp: 18/719b lim: 50 exec/s: 26 rss: 70Mb L: 26/50 MS: 1 InsertByte- 00:08:06.191 [2024-07-13 19:55:53.635167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:06.191 [2024-07-13 19:55:53.635199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.191 [2024-07-13 19:55:53.635305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:06.191 [2024-07-13 19:55:53.635333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.191 [2024-07-13 19:55:53.635475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:06.191 [2024-07-13 19:55:53.635500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.191 [2024-07-13 19:55:53.635628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:06.191 [2024-07-13 19:55:53.635653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.191 [2024-07-13 19:55:53.635782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:06.191 [2024-07-13 19:55:53.635806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:06.191 #27 NEW cov: 12123 ft: 14477 corp: 19/769b lim: 50 exec/s: 27 rss: 70Mb L: 50/50 MS: 1 ChangeBinInt- 00:08:06.191 [2024-07-13 19:55:53.694587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:06.191 [2024-07-13 19:55:53.694624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.191 [2024-07-13 19:55:53.694750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:06.191 [2024-07-13 19:55:53.694775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.191 #28 NEW cov: 12123 ft: 14529 corp: 20/793b lim: 50 exec/s: 28 rss: 70Mb L: 24/50 MS: 1 ChangeByte- 00:08:06.191 [2024-07-13 19:55:53.745572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:06.191 [2024-07-13 19:55:53.745604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.191 [2024-07-13 19:55:53.745706] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:06.191 [2024-07-13 19:55:53.745732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.191 [2024-07-13 19:55:53.745869] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:06.191 [2024-07-13 19:55:53.745895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.191 [2024-07-13 19:55:53.746032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:06.191 [2024-07-13 19:55:53.746056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.191 [2024-07-13 19:55:53.746196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:06.191 [2024-07-13 19:55:53.746227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:06.191 #29 NEW cov: 12123 ft: 14537 corp: 21/843b lim: 50 exec/s: 29 rss: 70Mb L: 50/50 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:08:06.192 [2024-07-13 19:55:53.795751] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:06.192 [2024-07-13 19:55:53.795785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.192 [2024-07-13 19:55:53.795893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:06.192 [2024-07-13 19:55:53.795924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.192 [2024-07-13 19:55:53.796058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:06.192 [2024-07-13 19:55:53.796084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.192 [2024-07-13 19:55:53.796235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:06.192 [2024-07-13 19:55:53.796266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.192 [2024-07-13 19:55:53.796413] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:06.192 [2024-07-13 19:55:53.796438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:06.192 #30 NEW cov: 12123 ft: 14560 corp: 22/893b lim: 50 exec/s: 30 rss: 70Mb L: 50/50 MS: 1 ChangeASCIIInt- 00:08:06.192 [2024-07-13 19:55:53.845632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:06.192 [2024-07-13 19:55:53.845664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.192 [2024-07-13 19:55:53.845783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:06.192 [2024-07-13 19:55:53.845810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.192 [2024-07-13 19:55:53.845957] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:06.192 [2024-07-13 19:55:53.845986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.192 [2024-07-13 19:55:53.846123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:06.192 [2024-07-13 19:55:53.846147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.450 #31 NEW cov: 12123 ft: 14586 corp: 23/940b lim: 50 exec/s: 31 rss: 70Mb L: 47/50 MS: 1 CopyPart- 00:08:06.450 [2024-07-13 19:55:53.895737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:06.450 [2024-07-13 19:55:53.895768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.450 [2024-07-13 19:55:53.895872] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:06.450 [2024-07-13 19:55:53.895895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.450 [2024-07-13 19:55:53.896038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:06.450 [2024-07-13 19:55:53.896064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.450 [2024-07-13 19:55:53.896202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:06.450 [2024-07-13 19:55:53.896224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.450 #32 NEW cov: 12123 ft: 14597 corp: 24/988b lim: 50 exec/s: 32 rss: 70Mb L: 48/50 MS: 1 EraseBytes- 00:08:06.450 [2024-07-13 19:55:53.955660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:06.450 [2024-07-13 19:55:53.955694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.450 [2024-07-13 19:55:53.955799] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:06.450 [2024-07-13 19:55:53.955827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.450 [2024-07-13 19:55:53.955974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:06.450 [2024-07-13 19:55:53.955998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.450 #33 NEW cov: 12123 ft: 14845 corp: 25/1024b lim: 50 exec/s: 33 rss: 70Mb L: 36/50 MS: 1 CrossOver- 00:08:06.450 [2024-07-13 19:55:54.016399] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:06.450 [2024-07-13 19:55:54.016433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.450 [2024-07-13 19:55:54.016541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:06.450 [2024-07-13 19:55:54.016566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.450 [2024-07-13 19:55:54.016696] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:06.450 [2024-07-13 19:55:54.016723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.450 [2024-07-13 19:55:54.016866] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:06.450 [2024-07-13 19:55:54.016887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.450 [2024-07-13 19:55:54.017020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:06.450 [2024-07-13 19:55:54.017043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:06.450 #34 NEW cov: 12123 ft: 14853 corp: 26/1074b lim: 50 exec/s: 34 rss: 70Mb L: 50/50 MS: 1 ChangeBit- 00:08:06.450 [2024-07-13 19:55:54.086615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:06.450 [2024-07-13 19:55:54.086652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.450 [2024-07-13 19:55:54.086760] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:06.450 [2024-07-13 19:55:54.086782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.450 [2024-07-13 19:55:54.086922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:06.450 [2024-07-13 19:55:54.086945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.450 [2024-07-13 19:55:54.087087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:06.450 
[2024-07-13 19:55:54.087109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.450 [2024-07-13 19:55:54.087245] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:06.450 [2024-07-13 19:55:54.087271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:06.450 #40 NEW cov: 12123 ft: 14884 corp: 27/1124b lim: 50 exec/s: 40 rss: 70Mb L: 50/50 MS: 1 CopyPart- 00:08:06.709 [2024-07-13 19:55:54.136743] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:06.709 [2024-07-13 19:55:54.136777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.709 [2024-07-13 19:55:54.136879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:06.709 [2024-07-13 19:55:54.136905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.709 [2024-07-13 19:55:54.137037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:06.709 [2024-07-13 19:55:54.137073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.709 [2024-07-13 19:55:54.137209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:06.709 [2024-07-13 19:55:54.137238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.709 [2024-07-13 19:55:54.137376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:06.709 [2024-07-13 19:55:54.137404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:06.709 #41 NEW cov: 12123 ft: 14892 corp: 28/1174b lim: 50 exec/s: 41 rss: 70Mb L: 50/50 MS: 1 ShuffleBytes- 00:08:06.709 [2024-07-13 19:55:54.186655] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:06.709 [2024-07-13 19:55:54.186685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.709 [2024-07-13 19:55:54.186790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:06.709 [2024-07-13 19:55:54.186817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.709 [2024-07-13 19:55:54.186956] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:06.709 [2024-07-13 19:55:54.186986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.709 [2024-07-13 19:55:54.187127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:06.709 [2024-07-13 19:55:54.187152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.709 #42 NEW cov: 12123 ft: 14960 corp: 29/1222b lim: 50 exec/s: 42 rss: 70Mb L: 48/50 MS: 1 InsertRepeatedBytes- 00:08:06.709 [2024-07-13 19:55:54.236232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:06.709 [2024-07-13 19:55:54.236261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.709 [2024-07-13 19:55:54.236398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:06.709 [2024-07-13 19:55:54.236423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.709 #43 NEW cov: 12123 ft: 14996 corp: 30/1248b lim: 50 exec/s: 43 rss: 70Mb L: 26/50 MS: 1 ChangeBit- 00:08:06.709 [2024-07-13 19:55:54.297217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:06.709 [2024-07-13 19:55:54.297248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.709 [2024-07-13 19:55:54.297357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:06.709 [2024-07-13 19:55:54.297385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.709 [2024-07-13 19:55:54.297536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:06.709 [2024-07-13 19:55:54.297559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.709 [2024-07-13 19:55:54.297683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:06.709 [2024-07-13 19:55:54.297705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.709 [2024-07-13 19:55:54.297840] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:08:06.709 [2024-07-13 19:55:54.297861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:06.709 #44 NEW cov: 12123 ft: 15073 corp: 31/1298b lim: 50 exec/s: 44 rss: 70Mb L: 50/50 MS: 1 ShuffleBytes- 00:08:06.709 [2024-07-13 19:55:54.357126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:06.709 [2024-07-13 19:55:54.357159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.709 [2024-07-13 19:55:54.357280] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:06.709 [2024-07-13 19:55:54.357308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.709 [2024-07-13 19:55:54.357434] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:06.709 [2024-07-13 19:55:54.357459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:06.709 [2024-07-13 19:55:54.357603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:06.709 [2024-07-13 19:55:54.357626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:06.968 #45 NEW cov: 12123 ft: 15092 corp: 32/1346b lim: 50 exec/s: 22 rss: 70Mb L: 48/50 MS: 1 CopyPart- 00:08:06.968 #45 DONE cov: 12123 ft: 15092 corp: 32/1346b lim: 50 exec/s: 22 rss: 70Mb 00:08:06.968 ###### Recommended dictionary. ###### 00:08:06.968 "\377\377\377\377\377\377\377\377" # Uses: 1 00:08:06.968 ###### End of recommended dictionary. ###### 00:08:06.968 Done 45 runs in 2 second(s) 00:08:06.968 19:55:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:08:06.968 19:55:54 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:06.968 19:55:54 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:06.968 19:55:54 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:08:06.968 19:55:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:08:06.968 19:55:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:06.968 19:55:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:06.968 19:55:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:06.968 19:55:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:08:06.968 19:55:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:06.968 19:55:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:06.968 19:55:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:08:06.968 19:55:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4422 00:08:06.968 19:55:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:06.968 19:55:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:08:06.968 19:55:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:06.969 19:55:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:06.969 19:55:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:06.969 19:55:54 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:08:06.969 [2024-07-13 19:55:54.546417] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:06.969 [2024-07-13 19:55:54.546505] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3683518 ] 00:08:06.969 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.227 [2024-07-13 19:55:54.799731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.227 [2024-07-13 19:55:54.831050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.227 [2024-07-13 19:55:54.883554] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.485 [2024-07-13 19:55:54.899841] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:08:07.485 INFO: Running with entropic power schedule (0xFF, 100). 00:08:07.485 INFO: Seed: 2085384842 00:08:07.485 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:08:07.485 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:08:07.485 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:07.485 INFO: A corpus is not provided, starting from an empty corpus 00:08:07.485 #2 INITED exec/s: 0 rss: 63Mb 00:08:07.485 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:07.485 This may also happen if the target rejected all inputs we tried so far 00:08:07.485 [2024-07-13 19:55:54.976911] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.485 [2024-07-13 19:55:54.976955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.485 [2024-07-13 19:55:54.977032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:07.485 [2024-07-13 19:55:54.977051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.485 [2024-07-13 19:55:54.977126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:07.485 [2024-07-13 19:55:54.977144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.750 NEW_FUNC[1/693]: 0x4bb140 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:08:07.750 NEW_FUNC[2/693]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:07.750 #10 NEW cov: 11905 ft: 11906 corp: 2/56b lim: 85 exec/s: 0 rss: 70Mb L: 55/55 MS: 3 InsertByte-ChangeBit-InsertRepeatedBytes- 00:08:07.750 [2024-07-13 19:55:55.307101] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.750 [2024-07-13 19:55:55.307156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.750 [2024-07-13 19:55:55.307298] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:07.750 [2024-07-13 19:55:55.307323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.750 [2024-07-13 19:55:55.307463] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:07.750 [2024-07-13 19:55:55.307493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.750 #16 NEW cov: 12035 ft: 12539 corp: 3/111b lim: 85 exec/s: 0 rss: 70Mb L: 55/55 MS: 1 ChangeByte- 00:08:07.750 [2024-07-13 19:55:55.367100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.750 [2024-07-13 19:55:55.367133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.750 [2024-07-13 19:55:55.367260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:07.750 [2024-07-13 19:55:55.367283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.750 [2024-07-13 19:55:55.367412] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:07.750 [2024-07-13 19:55:55.367438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:07.750 #17 NEW cov: 12041 ft: 12826 corp: 4/170b lim: 85 exec/s: 0 rss: 70Mb L: 59/59 MS: 1 InsertRepeatedBytes- 00:08:07.750 [2024-07-13 19:55:55.407256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.750 [2024-07-13 19:55:55.407291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.750 [2024-07-13 19:55:55.407414] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:07.750 [2024-07-13 19:55:55.407447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.750 [2024-07-13 19:55:55.407584] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:07.750 [2024-07-13 19:55:55.407607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.009 #19 NEW cov: 12126 ft: 13246 corp: 5/231b lim: 85 exec/s: 0 rss: 70Mb L: 61/61 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:08:08.009 [2024-07-13 19:55:55.447294] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.009 [2024-07-13 19:55:55.447330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.009 [2024-07-13 19:55:55.447470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:08.009 [2024-07-13 19:55:55.447492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.009 [2024-07-13 19:55:55.447615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:08.009 [2024-07-13 19:55:55.447638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.009 #20 NEW cov: 12126 ft: 13313 corp: 6/283b lim: 85 exec/s: 0 rss: 70Mb L: 52/61 MS: 1 CrossOver- 00:08:08.009 [2024-07-13 19:55:55.497464] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.009 [2024-07-13 19:55:55.497495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.009 [2024-07-13 19:55:55.497627] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:08.009 [2024-07-13 19:55:55.497653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.009 [2024-07-13 19:55:55.497784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:08.009 [2024-07-13 19:55:55.497810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.009 #21 NEW cov: 12126 ft: 13423 corp: 7/343b lim: 85 exec/s: 0 rss: 70Mb L: 60/61 MS: 1 InsertByte- 00:08:08.009 [2024-07-13 19:55:55.547555] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.009 [2024-07-13 19:55:55.547592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.009 [2024-07-13 19:55:55.547722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:08.009 [2024-07-13 19:55:55.547744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.009 [2024-07-13 19:55:55.547878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:08.009 [2024-07-13 19:55:55.547903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.009 #27 NEW cov: 12126 ft: 13527 corp: 8/398b lim: 85 exec/s: 0 rss: 70Mb L: 55/61 MS: 1 ChangeByte- 00:08:08.009 [2024-07-13 19:55:55.587665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.009 [2024-07-13 19:55:55.587697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.009 [2024-07-13 19:55:55.587814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:08.009 [2024-07-13 19:55:55.587841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.009 [2024-07-13 19:55:55.587959] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:08.009 [2024-07-13 19:55:55.587994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.009 #33 NEW cov: 12126 ft: 13580 corp: 9/454b lim: 85 exec/s: 0 rss: 70Mb L: 56/61 MS: 1 InsertByte- 00:08:08.009 [2024-07-13 19:55:55.637886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 
00:08:08.009 [2024-07-13 19:55:55.637920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.009 [2024-07-13 19:55:55.638042] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:08.009 [2024-07-13 19:55:55.638062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.009 [2024-07-13 19:55:55.638179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:08.009 [2024-07-13 19:55:55.638202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.009 #34 NEW cov: 12126 ft: 13602 corp: 10/513b lim: 85 exec/s: 0 rss: 70Mb L: 59/61 MS: 1 ChangeByte- 00:08:08.268 [2024-07-13 19:55:55.677935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.268 [2024-07-13 19:55:55.677971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.268 [2024-07-13 19:55:55.678088] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:08.268 [2024-07-13 19:55:55.678111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.268 [2024-07-13 19:55:55.678229] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:08.268 [2024-07-13 19:55:55.678254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.268 #35 NEW cov: 12126 ft: 13645 corp: 11/573b lim: 85 exec/s: 0 rss: 71Mb L: 60/61 MS: 1 ChangeBinInt- 00:08:08.268 [2024-07-13 19:55:55.728425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.268 [2024-07-13 19:55:55.728463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.268 [2024-07-13 19:55:55.728588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:08.268 [2024-07-13 19:55:55.728615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.268 [2024-07-13 19:55:55.728737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:08.268 [2024-07-13 19:55:55.728759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.268 [2024-07-13 19:55:55.728887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:08.268 [2024-07-13 19:55:55.728910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:08.268 #36 NEW cov: 12126 ft: 14004 corp: 12/644b lim: 85 exec/s: 0 rss: 71Mb L: 71/71 MS: 1 CopyPart- 00:08:08.268 [2024-07-13 19:55:55.768058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 
00:08:08.268 [2024-07-13 19:55:55.768086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.268 [2024-07-13 19:55:55.768216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:08.268 [2024-07-13 19:55:55.768238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.268 #37 NEW cov: 12126 ft: 14349 corp: 13/694b lim: 85 exec/s: 0 rss: 71Mb L: 50/71 MS: 1 EraseBytes- 00:08:08.268 [2024-07-13 19:55:55.807898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.268 [2024-07-13 19:55:55.807927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.268 #38 NEW cov: 12126 ft: 15179 corp: 14/722b lim: 85 exec/s: 0 rss: 71Mb L: 28/71 MS: 1 EraseBytes- 00:08:08.268 [2024-07-13 19:55:55.848277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.268 [2024-07-13 19:55:55.848312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.268 [2024-07-13 19:55:55.848460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:08.268 [2024-07-13 19:55:55.848482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.268 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:08.268 #39 NEW cov: 12149 ft: 15214 corp: 15/756b lim: 85 exec/s: 0 rss: 71Mb L: 34/71 MS: 1 EraseBytes- 00:08:08.268 [2024-07-13 19:55:55.898681] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.268 [2024-07-13 19:55:55.898717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.268 [2024-07-13 19:55:55.898837] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:08.268 [2024-07-13 19:55:55.898855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.268 [2024-07-13 19:55:55.898988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:08.268 [2024-07-13 19:55:55.899010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.528 #40 NEW cov: 12149 ft: 15222 corp: 16/815b lim: 85 exec/s: 0 rss: 71Mb L: 59/71 MS: 1 CopyPart- 00:08:08.528 [2024-07-13 19:55:55.948903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.528 [2024-07-13 19:55:55.948941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.528 [2024-07-13 19:55:55.949068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:08.528 [2024-07-13 19:55:55.949087] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.528 [2024-07-13 19:55:55.949214] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:08.528 [2024-07-13 19:55:55.949239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.528 #41 NEW cov: 12149 ft: 15252 corp: 17/867b lim: 85 exec/s: 41 rss: 71Mb L: 52/71 MS: 1 ShuffleBytes- 00:08:08.528 [2024-07-13 19:55:55.998468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.528 [2024-07-13 19:55:55.998500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.528 #42 NEW cov: 12149 ft: 15272 corp: 18/886b lim: 85 exec/s: 42 rss: 71Mb L: 19/71 MS: 1 EraseBytes- 00:08:08.528 [2024-07-13 19:55:56.049394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.528 [2024-07-13 19:55:56.049424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.528 [2024-07-13 19:55:56.049531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:08.528 [2024-07-13 19:55:56.049554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.528 [2024-07-13 19:55:56.049676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:08.528 [2024-07-13 19:55:56.049695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.528 [2024-07-13 19:55:56.049813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:08.528 [2024-07-13 19:55:56.049835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:08.528 #43 NEW cov: 12149 ft: 15286 corp: 19/969b lim: 85 exec/s: 43 rss: 71Mb L: 83/83 MS: 1 CrossOver- 00:08:08.528 [2024-07-13 19:55:56.089471] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.528 [2024-07-13 19:55:56.089502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.528 [2024-07-13 19:55:56.089635] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:08.528 [2024-07-13 19:55:56.089656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.528 [2024-07-13 19:55:56.089782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:08.528 [2024-07-13 19:55:56.089810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.528 [2024-07-13 19:55:56.089944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:08.528 [2024-07-13 19:55:56.089966] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:08.528 #44 NEW cov: 12149 ft: 15323 corp: 20/1053b lim: 85 exec/s: 44 rss: 71Mb L: 84/84 MS: 1 InsertRepeatedBytes- 00:08:08.528 [2024-07-13 19:55:56.129502] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.528 [2024-07-13 19:55:56.129534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.528 [2024-07-13 19:55:56.129604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:08.528 [2024-07-13 19:55:56.129624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.528 [2024-07-13 19:55:56.129751] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:08.528 [2024-07-13 19:55:56.129777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.528 [2024-07-13 19:55:56.129904] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:08.528 [2024-07-13 19:55:56.129933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:08.528 #45 NEW cov: 12149 ft: 15356 corp: 21/1137b lim: 85 exec/s: 45 rss: 71Mb L: 84/84 MS: 1 ChangeByte- 00:08:08.528 [2024-07-13 19:55:56.179031] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.528 [2024-07-13 19:55:56.179057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.787 #51 NEW cov: 12149 ft: 15459 corp: 22/1156b lim: 85 exec/s: 51 rss: 71Mb L: 19/84 MS: 1 ShuffleBytes- 00:08:08.787 [2024-07-13 19:55:56.229606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.787 [2024-07-13 19:55:56.229638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.787 [2024-07-13 19:55:56.229755] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:08.787 [2024-07-13 19:55:56.229782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.787 [2024-07-13 19:55:56.229907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:08.787 [2024-07-13 19:55:56.229934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.787 #52 NEW cov: 12149 ft: 15522 corp: 23/1212b lim: 85 exec/s: 52 rss: 71Mb L: 56/84 MS: 1 InsertByte- 00:08:08.787 [2024-07-13 19:55:56.269191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.787 [2024-07-13 19:55:56.269219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.787 #53 NEW cov: 12149 ft: 15554 corp: 24/1235b 
lim: 85 exec/s: 53 rss: 72Mb L: 23/84 MS: 1 CopyPart- 00:08:08.787 [2024-07-13 19:55:56.319847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.787 [2024-07-13 19:55:56.319879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.787 [2024-07-13 19:55:56.319994] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:08.787 [2024-07-13 19:55:56.320015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.787 [2024-07-13 19:55:56.320136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:08.787 [2024-07-13 19:55:56.320159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.787 #54 NEW cov: 12149 ft: 15561 corp: 25/1290b lim: 85 exec/s: 54 rss: 72Mb L: 55/84 MS: 1 ChangeBit- 00:08:08.787 [2024-07-13 19:55:56.370059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.787 [2024-07-13 19:55:56.370090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.787 [2024-07-13 19:55:56.370198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:08.787 [2024-07-13 19:55:56.370220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.787 [2024-07-13 19:55:56.370340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:08.787 [2024-07-13 19:55:56.370361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.787 #55 NEW cov: 12149 ft: 15593 corp: 26/1348b lim: 85 exec/s: 55 rss: 72Mb L: 58/84 MS: 1 CopyPart- 00:08:08.787 [2024-07-13 19:55:56.420409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.787 [2024-07-13 19:55:56.420446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.787 [2024-07-13 19:55:56.420564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:08.787 [2024-07-13 19:55:56.420587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.787 [2024-07-13 19:55:56.420699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:08.787 [2024-07-13 19:55:56.420720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.787 [2024-07-13 19:55:56.420841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:08.787 [2024-07-13 19:55:56.420861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:08.787 #56 NEW cov: 12149 ft: 15598 corp: 27/1432b 
lim: 85 exec/s: 56 rss: 72Mb L: 84/84 MS: 1 ChangeBit- 00:08:09.046 [2024-07-13 19:55:56.459897] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:09.046 [2024-07-13 19:55:56.459924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.046 #57 NEW cov: 12149 ft: 15630 corp: 28/1461b lim: 85 exec/s: 57 rss: 72Mb L: 29/84 MS: 1 InsertByte- 00:08:09.046 [2024-07-13 19:55:56.510234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:09.046 [2024-07-13 19:55:56.510268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.046 [2024-07-13 19:55:56.510389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:09.046 [2024-07-13 19:55:56.510413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.046 #58 NEW cov: 12149 ft: 15633 corp: 29/1507b lim: 85 exec/s: 58 rss: 72Mb L: 46/84 MS: 1 EraseBytes- 00:08:09.046 [2024-07-13 19:55:56.550627] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:09.046 [2024-07-13 19:55:56.550658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.046 [2024-07-13 19:55:56.550763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:09.046 [2024-07-13 19:55:56.550789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.046 [2024-07-13 19:55:56.550917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:09.046 [2024-07-13 19:55:56.550942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.046 #59 NEW cov: 12149 ft: 15660 corp: 30/1566b lim: 85 exec/s: 59 rss: 72Mb L: 59/84 MS: 1 ChangeByte- 00:08:09.046 [2024-07-13 19:55:56.590933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:09.046 [2024-07-13 19:55:56.590961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.046 [2024-07-13 19:55:56.591055] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:09.046 [2024-07-13 19:55:56.591075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.046 [2024-07-13 19:55:56.591194] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:09.046 [2024-07-13 19:55:56.591215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.046 [2024-07-13 19:55:56.591334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:09.046 [2024-07-13 19:55:56.591354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.046 #60 NEW cov: 12149 ft: 15676 corp: 31/1642b lim: 85 exec/s: 60 rss: 72Mb L: 76/84 MS: 1 InsertRepeatedBytes- 00:08:09.046 [2024-07-13 19:55:56.630847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:09.046 [2024-07-13 19:55:56.630880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.046 [2024-07-13 19:55:56.630997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:09.046 [2024-07-13 19:55:56.631018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.046 [2024-07-13 19:55:56.631145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:09.046 [2024-07-13 19:55:56.631166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.046 #61 NEW cov: 12149 ft: 15689 corp: 32/1704b lim: 85 exec/s: 61 rss: 72Mb L: 62/84 MS: 1 CopyPart- 00:08:09.046 [2024-07-13 19:55:56.680993] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:09.046 [2024-07-13 19:55:56.681025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.046 [2024-07-13 19:55:56.681141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:09.046 [2024-07-13 19:55:56.681172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.046 [2024-07-13 19:55:56.681291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:09.046 [2024-07-13 19:55:56.681310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.306 #62 NEW cov: 12149 ft: 15724 corp: 33/1756b lim: 85 exec/s: 62 rss: 72Mb L: 52/84 MS: 1 CopyPart- 00:08:09.306 [2024-07-13 19:55:56.731119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:09.306 [2024-07-13 19:55:56.731155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.306 [2024-07-13 19:55:56.731270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:09.306 [2024-07-13 19:55:56.731292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.306 [2024-07-13 19:55:56.731406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:09.306 [2024-07-13 19:55:56.731430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.306 #63 NEW cov: 12149 ft: 15728 corp: 34/1813b lim: 85 exec/s: 63 rss: 72Mb L: 57/84 MS: 1 InsertByte- 00:08:09.306 [2024-07-13 19:55:56.781278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 
nsid:0 00:08:09.306 [2024-07-13 19:55:56.781311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.306 [2024-07-13 19:55:56.781460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:09.306 [2024-07-13 19:55:56.781475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.306 [2024-07-13 19:55:56.781587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:09.306 [2024-07-13 19:55:56.781609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.306 #64 NEW cov: 12149 ft: 15734 corp: 35/1879b lim: 85 exec/s: 64 rss: 72Mb L: 66/84 MS: 1 InsertRepeatedBytes- 00:08:09.306 [2024-07-13 19:55:56.830942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:09.306 [2024-07-13 19:55:56.830975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.306 #65 NEW cov: 12149 ft: 15739 corp: 36/1903b lim: 85 exec/s: 65 rss: 72Mb L: 24/84 MS: 1 CrossOver- 00:08:09.306 [2024-07-13 19:55:56.871841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:09.306 [2024-07-13 19:55:56.871871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.306 [2024-07-13 19:55:56.871997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:09.306 [2024-07-13 19:55:56.872020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.306 [2024-07-13 19:55:56.872142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:09.306 [2024-07-13 19:55:56.872162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.306 [2024-07-13 19:55:56.872285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:09.306 [2024-07-13 19:55:56.872308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.306 #68 NEW cov: 12149 ft: 15748 corp: 37/1975b lim: 85 exec/s: 68 rss: 72Mb L: 72/84 MS: 3 CrossOver-CMP-InsertRepeatedBytes- DE: "\377\377\376\377"- 00:08:09.306 [2024-07-13 19:55:56.911754] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:09.306 [2024-07-13 19:55:56.911784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.306 [2024-07-13 19:55:56.911910] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:09.306 [2024-07-13 19:55:56.911935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.307 [2024-07-13 19:55:56.912057] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:09.307 [2024-07-13 19:55:56.912080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.307 #69 NEW cov: 12149 ft: 15762 corp: 38/2037b lim: 85 exec/s: 34 rss: 73Mb L: 62/84 MS: 1 ChangeBinInt- 00:08:09.307 #69 DONE cov: 12149 ft: 15762 corp: 38/2037b lim: 85 exec/s: 34 rss: 73Mb 00:08:09.307 ###### Recommended dictionary. ###### 00:08:09.307 "\377\377\376\377" # Uses: 0 00:08:09.307 ###### End of recommended dictionary. ###### 00:08:09.307 Done 69 runs in 2 second(s) 00:08:09.566 19:55:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:08:09.566 19:55:57 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:09.566 19:55:57 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:09.566 19:55:57 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:08:09.566 19:55:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:08:09.566 19:55:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:09.566 19:55:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:09.566 19:55:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:09.566 19:55:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:08:09.566 19:55:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:09.566 19:55:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:09.566 19:55:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:08:09.566 19:55:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4423 00:08:09.566 19:55:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:09.566 19:55:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:08:09.566 19:55:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:09.566 19:55:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:09.566 19:55:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:09.566 19:55:57 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:08:09.566 [2024-07-13 19:55:57.106138] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:09.566 [2024-07-13 19:55:57.106204] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3683943 ] 00:08:09.566 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.825 [2024-07-13 19:55:57.350918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.825 [2024-07-13 19:55:57.382400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.825 [2024-07-13 19:55:57.434868] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.825 [2024-07-13 19:55:57.451181] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:08:09.825 INFO: Running with entropic power schedule (0xFF, 100). 00:08:09.825 INFO: Seed: 341471289 00:08:09.825 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:08:09.825 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:08:09.825 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:09.825 INFO: A corpus is not provided, starting from an empty corpus 00:08:09.825 #2 INITED exec/s: 0 rss: 63Mb 00:08:09.825 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:09.825 This may also happen if the target rejected all inputs we tried so far 00:08:10.085 [2024-07-13 19:55:57.496586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.085 [2024-07-13 19:55:57.496617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.085 [2024-07-13 19:55:57.496657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.085 [2024-07-13 19:55:57.496674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.085 [2024-07-13 19:55:57.496732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:10.085 [2024-07-13 19:55:57.496749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.085 [2024-07-13 19:55:57.496806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:10.085 [2024-07-13 19:55:57.496822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.344 NEW_FUNC[1/692]: 0x4be370 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:08:10.344 NEW_FUNC[2/692]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:10.344 #3 NEW cov: 11838 ft: 11823 corp: 2/25b lim: 25 exec/s: 0 rss: 69Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:08:10.344 [2024-07-13 19:55:57.817116] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.344 [2024-07-13 19:55:57.817151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:08:10.344 [2024-07-13 19:55:57.817210] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.344 [2024-07-13 19:55:57.817228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.344 #6 NEW cov: 11968 ft: 12979 corp: 3/37b lim: 25 exec/s: 0 rss: 69Mb L: 12/24 MS: 3 InsertByte-ShuffleBytes-CrossOver- 00:08:10.344 [2024-07-13 19:55:57.857180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.344 [2024-07-13 19:55:57.857208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.344 [2024-07-13 19:55:57.857259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.344 [2024-07-13 19:55:57.857276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.344 #7 NEW cov: 11974 ft: 13181 corp: 4/49b lim: 25 exec/s: 0 rss: 69Mb L: 12/24 MS: 1 ChangeBit- 00:08:10.344 [2024-07-13 19:55:57.907291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.344 [2024-07-13 19:55:57.907318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.344 [2024-07-13 19:55:57.907388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.344 [2024-07-13 19:55:57.907404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.344 #8 NEW cov: 12059 ft: 13399 corp: 5/61b lim: 25 exec/s: 0 rss: 70Mb L: 12/24 MS: 1 CMP- DE: "\003\000\000\000"- 00:08:10.344 [2024-07-13 19:55:57.957453] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.344 [2024-07-13 19:55:57.957480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.344 [2024-07-13 19:55:57.957522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.345 [2024-07-13 19:55:57.957536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.345 #9 NEW cov: 12059 ft: 13493 corp: 6/72b lim: 25 exec/s: 0 rss: 70Mb L: 11/24 MS: 1 EraseBytes- 00:08:10.345 [2024-07-13 19:55:57.997531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.345 [2024-07-13 19:55:57.997558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.345 [2024-07-13 19:55:57.997600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.345 [2024-07-13 19:55:57.997616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.604 #10 NEW cov: 12059 ft: 13621 corp: 7/83b lim: 25 exec/s: 0 rss: 70Mb L: 11/24 MS: 1 PersAutoDict- DE: "\003\000\000\000"- 00:08:10.604 [2024-07-13 19:55:58.047673] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.604 [2024-07-13 19:55:58.047700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.604 [2024-07-13 19:55:58.047755] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.604 [2024-07-13 19:55:58.047772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.604 #11 NEW cov: 12059 ft: 13673 corp: 8/95b lim: 25 exec/s: 0 rss: 70Mb L: 12/24 MS: 1 ShuffleBytes- 00:08:10.604 [2024-07-13 19:55:58.097787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.604 [2024-07-13 19:55:58.097814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.604 [2024-07-13 19:55:58.097864] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.604 [2024-07-13 19:55:58.097881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.604 #12 NEW cov: 12059 ft: 13697 corp: 9/107b lim: 25 exec/s: 0 rss: 70Mb L: 12/24 MS: 1 InsertByte- 00:08:10.604 [2024-07-13 19:55:58.137946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.604 [2024-07-13 19:55:58.137972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.604 [2024-07-13 19:55:58.138037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.604 [2024-07-13 19:55:58.138057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.604 #13 NEW cov: 12059 ft: 13739 corp: 10/121b lim: 25 exec/s: 0 rss: 70Mb L: 14/24 MS: 1 CMP- DE: "\377\002"- 00:08:10.604 [2024-07-13 19:55:58.188048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.604 [2024-07-13 19:55:58.188074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.604 [2024-07-13 19:55:58.188113] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.604 [2024-07-13 19:55:58.188129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.604 #14 NEW cov: 12059 ft: 13813 corp: 11/134b lim: 25 exec/s: 0 rss: 70Mb L: 13/24 MS: 1 InsertByte- 00:08:10.604 [2024-07-13 19:55:58.228384] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.604 [2024-07-13 19:55:58.228411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.604 [2024-07-13 19:55:58.228465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.604 [2024-07-13 19:55:58.228481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.604 [2024-07-13 19:55:58.228532] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:10.604 [2024-07-13 19:55:58.228548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.604 [2024-07-13 19:55:58.228602] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:10.604 [2024-07-13 19:55:58.228617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.604 #15 NEW cov: 12059 ft: 13842 corp: 12/158b lim: 25 exec/s: 0 rss: 70Mb L: 24/24 MS: 1 ShuffleBytes- 00:08:10.863 [2024-07-13 19:55:58.278292] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.864 [2024-07-13 19:55:58.278320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.864 [2024-07-13 19:55:58.278374] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.864 [2024-07-13 19:55:58.278391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.864 #16 NEW cov: 12059 ft: 13929 corp: 13/169b lim: 25 exec/s: 0 rss: 70Mb L: 11/24 MS: 1 EraseBytes- 00:08:10.864 [2024-07-13 19:55:58.328728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.864 [2024-07-13 19:55:58.328756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.864 [2024-07-13 19:55:58.328805] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.864 [2024-07-13 19:55:58.328822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.864 [2024-07-13 19:55:58.328874] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:10.864 [2024-07-13 19:55:58.328889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.864 [2024-07-13 19:55:58.328945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:10.864 [2024-07-13 19:55:58.328961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.864 #17 NEW cov: 12059 ft: 13945 corp: 14/193b lim: 25 exec/s: 0 rss: 70Mb L: 24/24 MS: 1 ChangeBit- 00:08:10.864 [2024-07-13 19:55:58.378585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.864 [2024-07-13 19:55:58.378614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.864 [2024-07-13 19:55:58.378664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.864 [2024-07-13 19:55:58.378680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:08:10.864 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:10.864 #18 NEW cov: 12082 ft: 13992 corp: 15/204b lim: 25 exec/s: 0 rss: 70Mb L: 11/24 MS: 1 ChangeByte- 00:08:10.864 [2024-07-13 19:55:58.418928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.864 [2024-07-13 19:55:58.418955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.864 [2024-07-13 19:55:58.419003] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.864 [2024-07-13 19:55:58.419019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.864 [2024-07-13 19:55:58.419076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:10.864 [2024-07-13 19:55:58.419091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.864 [2024-07-13 19:55:58.419148] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:10.864 [2024-07-13 19:55:58.419164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.864 #19 NEW cov: 12082 ft: 14002 corp: 16/228b lim: 25 exec/s: 0 rss: 70Mb L: 24/24 MS: 1 ChangeBit- 00:08:10.864 [2024-07-13 19:55:58.468891] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.864 [2024-07-13 19:55:58.468918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.864 [2024-07-13 19:55:58.468958] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.864 [2024-07-13 19:55:58.468974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.864 #20 NEW cov: 12082 ft: 14019 corp: 17/239b lim: 25 exec/s: 20 rss: 70Mb L: 11/24 MS: 1 ChangeBit- 00:08:10.864 [2024-07-13 19:55:58.509002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.864 [2024-07-13 19:55:58.509029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.864 [2024-07-13 19:55:58.509068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.864 [2024-07-13 19:55:58.509084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.123 #21 NEW cov: 12082 ft: 14060 corp: 18/252b lim: 25 exec/s: 21 rss: 70Mb L: 13/24 MS: 1 PersAutoDict- DE: "\377\002"- 00:08:11.123 [2024-07-13 19:55:58.559275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:11.124 [2024-07-13 19:55:58.559304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.124 [2024-07-13 19:55:58.559342] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:11.124 [2024-07-13 19:55:58.559359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.124 [2024-07-13 19:55:58.559421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:11.124 [2024-07-13 19:55:58.559438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.124 #22 NEW cov: 12082 ft: 14265 corp: 19/268b lim: 25 exec/s: 22 rss: 70Mb L: 16/24 MS: 1 CopyPart- 00:08:11.124 [2024-07-13 19:55:58.599247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:11.124 [2024-07-13 19:55:58.599274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.124 [2024-07-13 19:55:58.599340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:11.124 [2024-07-13 19:55:58.599357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.124 #23 NEW cov: 12082 ft: 14266 corp: 20/280b lim: 25 exec/s: 23 rss: 70Mb L: 12/24 MS: 1 CopyPart- 00:08:11.124 [2024-07-13 19:55:58.639360] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:11.124 [2024-07-13 19:55:58.639387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.124 [2024-07-13 19:55:58.639446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:11.124 [2024-07-13 19:55:58.639464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.124 #29 NEW cov: 12082 ft: 14305 corp: 21/292b lim: 25 exec/s: 29 rss: 70Mb L: 12/24 MS: 1 ChangeByte- 00:08:11.124 [2024-07-13 19:55:58.689508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:11.124 [2024-07-13 19:55:58.689536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.124 [2024-07-13 19:55:58.689598] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:11.124 [2024-07-13 19:55:58.689617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.124 #30 NEW cov: 12082 ft: 14313 corp: 22/304b lim: 25 exec/s: 30 rss: 70Mb L: 12/24 MS: 1 InsertByte- 00:08:11.124 [2024-07-13 19:55:58.739901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:11.124 [2024-07-13 19:55:58.739928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.124 [2024-07-13 19:55:58.739986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:11.124 [2024-07-13 19:55:58.740003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.124 [2024-07-13 19:55:58.740059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:11.124 [2024-07-13 19:55:58.740076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.124 [2024-07-13 19:55:58.740131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:11.124 [2024-07-13 19:55:58.740148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.124 #31 NEW cov: 12082 ft: 14334 corp: 23/328b lim: 25 exec/s: 31 rss: 70Mb L: 24/24 MS: 1 CMP- DE: "\377\377\001\000"- 00:08:11.124 [2024-07-13 19:55:58.779751] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:11.124 [2024-07-13 19:55:58.779780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.124 [2024-07-13 19:55:58.779842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:11.124 [2024-07-13 19:55:58.779859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.383 #32 NEW cov: 12082 ft: 14408 corp: 24/340b lim: 25 exec/s: 32 rss: 70Mb L: 12/24 MS: 1 ChangeBinInt- 00:08:11.383 [2024-07-13 19:55:58.819828] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:11.384 [2024-07-13 19:55:58.819855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.384 [2024-07-13 19:55:58.819911] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:11.384 [2024-07-13 19:55:58.819927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.384 #33 NEW cov: 12082 ft: 14418 corp: 25/353b lim: 25 exec/s: 33 rss: 70Mb L: 13/24 MS: 1 ChangeBinInt- 00:08:11.384 [2024-07-13 19:55:58.870382] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:11.384 [2024-07-13 19:55:58.870409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.384 [2024-07-13 19:55:58.870487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:11.384 [2024-07-13 19:55:58.870502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.384 [2024-07-13 19:55:58.870557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:11.384 [2024-07-13 19:55:58.870574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.384 [2024-07-13 19:55:58.870627] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:11.384 [2024-07-13 19:55:58.870642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.384 [2024-07-13 19:55:58.870697] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:08:11.384 [2024-07-13 19:55:58.870713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:11.384 #34 NEW cov: 12082 ft: 14497 corp: 26/378b lim: 25 exec/s: 34 rss: 71Mb L: 25/25 MS: 1 CopyPart- 00:08:11.384 [2024-07-13 19:55:58.920482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:11.384 [2024-07-13 19:55:58.920509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.384 [2024-07-13 19:55:58.920583] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:11.384 [2024-07-13 19:55:58.920598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.384 [2024-07-13 19:55:58.920653] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:11.384 [2024-07-13 19:55:58.920669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.384 [2024-07-13 19:55:58.920723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:11.384 [2024-07-13 19:55:58.920739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.384 [2024-07-13 19:55:58.920797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:08:11.384 [2024-07-13 19:55:58.920813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:11.384 #35 NEW cov: 12082 ft: 14519 corp: 27/403b lim: 25 exec/s: 35 rss: 71Mb L: 25/25 MS: 1 InsertByte- 00:08:11.384 [2024-07-13 19:55:58.970286] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:11.384 [2024-07-13 19:55:58.970313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.384 [2024-07-13 19:55:58.970355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:11.384 [2024-07-13 19:55:58.970373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.384 #36 NEW cov: 12082 ft: 14528 corp: 28/416b lim: 25 exec/s: 36 rss: 71Mb L: 13/25 MS: 1 ShuffleBytes- 00:08:11.384 [2024-07-13 19:55:59.020415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:11.384 [2024-07-13 19:55:59.020448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.384 [2024-07-13 19:55:59.020522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:11.384 [2024-07-13 19:55:59.020541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:08:11.644 #37 NEW cov: 12082 ft: 14553 corp: 29/428b lim: 25 exec/s: 37 rss: 71Mb L: 12/25 MS: 1 ChangeByte- 00:08:11.644 [2024-07-13 19:55:59.070643] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:11.644 [2024-07-13 19:55:59.070670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.644 [2024-07-13 19:55:59.070728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:11.644 [2024-07-13 19:55:59.070746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.644 #43 NEW cov: 12082 ft: 14554 corp: 30/440b lim: 25 exec/s: 43 rss: 71Mb L: 12/25 MS: 1 PersAutoDict- DE: "\003\000\000\000"- 00:08:11.644 [2024-07-13 19:55:59.110846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:11.644 [2024-07-13 19:55:59.110874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.644 [2024-07-13 19:55:59.110913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:11.644 [2024-07-13 19:55:59.110931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.644 [2024-07-13 19:55:59.110990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:11.644 [2024-07-13 19:55:59.111007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.644 #47 NEW cov: 12082 ft: 14655 corp: 31/456b lim: 25 exec/s: 47 rss: 71Mb L: 16/25 MS: 4 ShuffleBytes-CrossOver-ShuffleBytes-InsertRepeatedBytes- 00:08:11.644 [2024-07-13 19:55:59.150808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:11.644 [2024-07-13 19:55:59.150835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.644 [2024-07-13 19:55:59.150891] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:11.644 [2024-07-13 19:55:59.150908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.644 #48 NEW cov: 12082 ft: 14664 corp: 32/468b lim: 25 exec/s: 48 rss: 71Mb L: 12/25 MS: 1 ChangeBinInt- 00:08:11.644 [2024-07-13 19:55:59.191136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:11.644 [2024-07-13 19:55:59.191165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.644 [2024-07-13 19:55:59.191224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:11.644 [2024-07-13 19:55:59.191238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.644 [2024-07-13 19:55:59.191297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 
cid:2 nsid:0 00:08:11.644 [2024-07-13 19:55:59.191313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.644 [2024-07-13 19:55:59.191376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:11.644 [2024-07-13 19:55:59.191393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.644 #49 NEW cov: 12082 ft: 14676 corp: 33/491b lim: 25 exec/s: 49 rss: 71Mb L: 23/25 MS: 1 EraseBytes- 00:08:11.644 [2024-07-13 19:55:59.231025] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:11.644 [2024-07-13 19:55:59.231051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.644 [2024-07-13 19:55:59.231108] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:11.644 [2024-07-13 19:55:59.231123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.644 #50 NEW cov: 12082 ft: 14683 corp: 34/504b lim: 25 exec/s: 50 rss: 71Mb L: 13/25 MS: 1 CrossOver- 00:08:11.644 [2024-07-13 19:55:59.281192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:11.644 [2024-07-13 19:55:59.281218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.644 [2024-07-13 19:55:59.281276] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:11.644 [2024-07-13 19:55:59.281293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.904 #51 NEW cov: 12082 ft: 14703 corp: 35/516b lim: 25 exec/s: 51 rss: 71Mb L: 12/25 MS: 1 ChangeByte- 00:08:11.904 [2024-07-13 19:55:59.331452] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:11.904 [2024-07-13 19:55:59.331478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.904 [2024-07-13 19:55:59.331542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:11.904 [2024-07-13 19:55:59.331559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.904 [2024-07-13 19:55:59.331619] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:11.904 [2024-07-13 19:55:59.331636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.904 #52 NEW cov: 12082 ft: 14710 corp: 36/533b lim: 25 exec/s: 52 rss: 71Mb L: 17/25 MS: 1 EraseBytes- 00:08:11.904 [2024-07-13 19:55:59.371693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:11.904 [2024-07-13 19:55:59.371720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.904 [2024-07-13 
19:55:59.371786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:11.904 [2024-07-13 19:55:59.371803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.904 [2024-07-13 19:55:59.371863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:11.904 [2024-07-13 19:55:59.371880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.904 [2024-07-13 19:55:59.371938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:11.904 [2024-07-13 19:55:59.371955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.904 #53 NEW cov: 12082 ft: 14726 corp: 37/554b lim: 25 exec/s: 53 rss: 71Mb L: 21/25 MS: 1 InsertRepeatedBytes- 00:08:11.904 [2024-07-13 19:55:59.411554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:11.904 [2024-07-13 19:55:59.411580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.904 [2024-07-13 19:55:59.411636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:11.904 [2024-07-13 19:55:59.411653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.904 #54 NEW cov: 12082 ft: 14731 corp: 38/567b lim: 25 exec/s: 54 rss: 71Mb L: 13/25 MS: 1 ChangeBit- 00:08:11.904 [2024-07-13 19:55:59.451775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:11.904 [2024-07-13 19:55:59.451801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.904 [2024-07-13 19:55:59.451857] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:11.904 [2024-07-13 19:55:59.451873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.904 [2024-07-13 19:55:59.451928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:11.904 [2024-07-13 19:55:59.451944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.904 #55 NEW cov: 12082 ft: 14740 corp: 39/584b lim: 25 exec/s: 27 rss: 71Mb L: 17/25 MS: 1 ChangeBit- 00:08:11.904 #55 DONE cov: 12082 ft: 14740 corp: 39/584b lim: 25 exec/s: 27 rss: 71Mb 00:08:11.904 ###### Recommended dictionary. ###### 00:08:11.904 "\003\000\000\000" # Uses: 2 00:08:11.904 "\377\002" # Uses: 1 00:08:11.904 "\377\377\001\000" # Uses: 0 00:08:11.904 ###### End of recommended dictionary. 
###### 00:08:11.904 Done 55 runs in 2 second(s) 00:08:12.164 19:55:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:08:12.164 19:55:59 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:12.164 19:55:59 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:12.164 19:55:59 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:08:12.164 19:55:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:08:12.164 19:55:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:12.164 19:55:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:12.164 19:55:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:12.164 19:55:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:08:12.164 19:55:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:12.164 19:55:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:12.164 19:55:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:08:12.164 19:55:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4424 00:08:12.164 19:55:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:12.164 19:55:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:08:12.164 19:55:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:12.164 19:55:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:12.164 19:55:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:12.164 19:55:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:08:12.164 [2024-07-13 19:55:59.643915] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:12.164 [2024-07-13 19:55:59.643988] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3684343 ] 00:08:12.164 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.423 [2024-07-13 19:55:59.899587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.423 [2024-07-13 19:55:59.929316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.423 [2024-07-13 19:55:59.981798] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.423 [2024-07-13 19:55:59.998110] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:08:12.423 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:12.423 INFO: Seed: 2886437850 00:08:12.423 INFO: Loaded 1 modules (357263 inline 8-bit counters): 357263 [0x27e3c0c, 0x283af9b), 00:08:12.423 INFO: Loaded 1 PC tables (357263 PCs): 357263 [0x283afa0,0x2dae890), 00:08:12.423 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:12.423 INFO: A corpus is not provided, starting from an empty corpus 00:08:12.423 #2 INITED exec/s: 0 rss: 62Mb 00:08:12.423 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:12.423 This may also happen if the target rejected all inputs we tried so far 00:08:12.423 [2024-07-13 19:56:00.068222] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.423 [2024-07-13 19:56:00.068262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.941 NEW_FUNC[1/693]: 0x4bf450 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:08:12.942 NEW_FUNC[2/693]: 0x4d00b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:12.942 #8 NEW cov: 11913 ft: 11914 corp: 2/40b lim: 100 exec/s: 0 rss: 69Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:08:12.942 [2024-07-13 19:56:00.419025] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.942 [2024-07-13 19:56:00.419078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.942 [2024-07-13 19:56:00.419203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.942 [2024-07-13 19:56:00.419235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.942 [2024-07-13 19:56:00.419369] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.942 [2024-07-13 19:56:00.419397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.942 #11 NEW cov: 12043 ft: 13527 corp: 3/113b lim: 100 exec/s: 0 rss: 69Mb L: 73/73 MS: 3 CopyPart-CopyPart-InsertRepeatedBytes- 00:08:12.942 [2024-07-13 19:56:00.468983] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15697817502389754329 len:55770 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.942 [2024-07-13 19:56:00.469019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.942 [2024-07-13 19:56:00.469131] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:15697817505862638041 len:55770 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.942 [2024-07-13 19:56:00.469154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.942 [2024-07-13 19:56:00.469276] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:15697817505862638041 len:55770 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.942 [2024-07-13 19:56:00.469300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.942 #15 NEW cov: 12049 ft: 13845 corp: 4/182b lim: 100 exec/s: 0 rss: 69Mb L: 69/73 MS: 4 CrossOver-CopyPart-InsertByte-InsertRepeatedBytes- 00:08:12.942 [2024-07-13 19:56:00.509037] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.942 [2024-07-13 19:56:00.509068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.942 [2024-07-13 19:56:00.509168] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.942 [2024-07-13 19:56:00.509192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.942 [2024-07-13 19:56:00.509302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:268435456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.942 [2024-07-13 19:56:00.509324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.942 #16 NEW cov: 12134 ft: 14048 corp: 5/255b lim: 100 exec/s: 0 rss: 69Mb L: 73/73 MS: 1 ChangeBit- 00:08:12.942 [2024-07-13 19:56:00.559377] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.942 [2024-07-13 19:56:00.559410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.942 [2024-07-13 19:56:00.559531] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.942 [2024-07-13 19:56:00.559554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.942 [2024-07-13 19:56:00.559677] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.942 [2024-07-13 19:56:00.559697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.942 #17 NEW cov: 12134 ft: 14094 corp: 6/328b lim: 100 exec/s: 0 rss: 69Mb L: 73/73 MS: 1 ShuffleBytes- 00:08:12.942 [2024-07-13 19:56:00.599474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.942 [2024-07-13 19:56:00.599502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.942 [2024-07-13 19:56:00.599607] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.942 [2024-07-13 19:56:00.599638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.942 [2024-07-13 19:56:00.599761] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 
cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.942 [2024-07-13 19:56:00.599790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.201 #18 NEW cov: 12134 ft: 14159 corp: 7/389b lim: 100 exec/s: 0 rss: 69Mb L: 61/73 MS: 1 EraseBytes- 00:08:13.201 [2024-07-13 19:56:00.649552] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.201 [2024-07-13 19:56:00.649583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.201 [2024-07-13 19:56:00.649694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:274877906944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.201 [2024-07-13 19:56:00.649719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.201 [2024-07-13 19:56:00.649837] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.201 [2024-07-13 19:56:00.649859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.201 #19 NEW cov: 12134 ft: 14276 corp: 8/463b lim: 100 exec/s: 0 rss: 70Mb L: 74/74 MS: 1 InsertByte- 00:08:13.201 [2024-07-13 19:56:00.689693] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.201 [2024-07-13 19:56:00.689724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.201 [2024-07-13 19:56:00.689823] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.201 [2024-07-13 19:56:00.689852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.201 [2024-07-13 19:56:00.689963] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:268435456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.201 [2024-07-13 19:56:00.689982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.201 #20 NEW cov: 12134 ft: 14330 corp: 9/536b lim: 100 exec/s: 0 rss: 70Mb L: 73/74 MS: 1 ShuffleBytes- 00:08:13.201 [2024-07-13 19:56:00.740009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15697817502389754329 len:55770 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.201 [2024-07-13 19:56:00.740039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.201 [2024-07-13 19:56:00.740128] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:15697817505862638041 len:55770 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.201 [2024-07-13 19:56:00.740151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.201 [2024-07-13 19:56:00.740272] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:15697817505862638041 len:55770 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.201 [2024-07-13 19:56:00.740297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.201 [2024-07-13 19:56:00.740426] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:15697817505862638041 len:55770 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.201 [2024-07-13 19:56:00.740448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.201 #21 NEW cov: 12134 ft: 14731 corp: 10/629b lim: 100 exec/s: 0 rss: 70Mb L: 93/93 MS: 1 CopyPart- 00:08:13.201 [2024-07-13 19:56:00.790217] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.201 [2024-07-13 19:56:00.790254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.201 [2024-07-13 19:56:00.790367] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.201 [2024-07-13 19:56:00.790391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.201 [2024-07-13 19:56:00.790508] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:268435456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.201 [2024-07-13 19:56:00.790530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.201 [2024-07-13 19:56:00.790648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.201 [2024-07-13 19:56:00.790672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.201 #22 NEW cov: 12134 ft: 14788 corp: 11/727b lim: 100 exec/s: 0 rss: 70Mb L: 98/98 MS: 1 CrossOver- 00:08:13.201 [2024-07-13 19:56:00.839578] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.201 [2024-07-13 19:56:00.839607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.519 #23 NEW cov: 12134 ft: 14890 corp: 12/766b lim: 100 exec/s: 0 rss: 70Mb L: 39/98 MS: 1 ChangeByte- 00:08:13.519 [2024-07-13 19:56:00.890304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.519 [2024-07-13 19:56:00.890335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.519 [2024-07-13 19:56:00.890448] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.519 [2024-07-13 19:56:00.890468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.519 [2024-07-13 
19:56:00.890582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:268435456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.519 [2024-07-13 19:56:00.890602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.519 #24 NEW cov: 12134 ft: 14936 corp: 13/839b lim: 100 exec/s: 0 rss: 70Mb L: 73/98 MS: 1 ShuffleBytes- 00:08:13.519 [2024-07-13 19:56:00.929873] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.519 [2024-07-13 19:56:00.929905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.519 NEW_FUNC[1/1]: 0x1a826f0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:13.519 #25 NEW cov: 12157 ft: 14970 corp: 14/876b lim: 100 exec/s: 0 rss: 70Mb L: 37/98 MS: 1 EraseBytes- 00:08:13.519 [2024-07-13 19:56:00.980576] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.519 [2024-07-13 19:56:00.980611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.519 [2024-07-13 19:56:00.980719] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.519 [2024-07-13 19:56:00.980741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.519 [2024-07-13 19:56:00.980864] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.519 [2024-07-13 19:56:00.980888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.519 #26 NEW cov: 12157 ft: 15024 corp: 15/949b lim: 100 exec/s: 0 rss: 70Mb L: 73/98 MS: 1 ShuffleBytes- 00:08:13.519 [2024-07-13 19:56:01.020859] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.519 [2024-07-13 19:56:01.020891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.519 [2024-07-13 19:56:01.020962] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:274877906944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.519 [2024-07-13 19:56:01.020987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.519 [2024-07-13 19:56:01.021107] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.519 [2024-07-13 19:56:01.021132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.519 [2024-07-13 19:56:01.021257] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.519 [2024-07-13 19:56:01.021284] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.519 #27 NEW cov: 12157 ft: 15059 corp: 16/1036b lim: 100 exec/s: 27 rss: 70Mb L: 87/98 MS: 1 CrossOver- 00:08:13.519 [2024-07-13 19:56:01.060751] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.519 [2024-07-13 19:56:01.060782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.519 [2024-07-13 19:56:01.060880] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:274877906944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.519 [2024-07-13 19:56:01.060906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.519 [2024-07-13 19:56:01.061033] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:4611686018427387904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.519 [2024-07-13 19:56:01.061057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.519 #28 NEW cov: 12157 ft: 15073 corp: 17/1113b lim: 100 exec/s: 28 rss: 70Mb L: 77/98 MS: 1 EraseBytes- 00:08:13.519 [2024-07-13 19:56:01.110648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1953184666628070171 len:6940 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.519 [2024-07-13 19:56:01.110678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.519 #30 NEW cov: 12157 ft: 15164 corp: 18/1148b lim: 100 exec/s: 30 rss: 70Mb L: 35/98 MS: 2 ChangeBit-InsertRepeatedBytes- 00:08:13.778 [2024-07-13 19:56:01.151027] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.778 [2024-07-13 19:56:01.151062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.778 [2024-07-13 19:56:01.151159] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.778 [2024-07-13 19:56:01.151186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.778 [2024-07-13 19:56:01.151310] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.778 [2024-07-13 19:56:01.151333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.778 #31 NEW cov: 12157 ft: 15294 corp: 19/1209b lim: 100 exec/s: 31 rss: 70Mb L: 61/98 MS: 1 CopyPart- 00:08:13.778 [2024-07-13 19:56:01.201161] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.778 [2024-07-13 19:56:01.201195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.778 [2024-07-13 19:56:01.201309] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:274877906944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.778 [2024-07-13 19:56:01.201331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.778 [2024-07-13 19:56:01.201455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.778 [2024-07-13 19:56:01.201477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.778 #32 NEW cov: 12157 ft: 15352 corp: 20/1283b lim: 100 exec/s: 32 rss: 70Mb L: 74/98 MS: 1 ShuffleBytes- 00:08:13.778 [2024-07-13 19:56:01.241222] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:11540474212909056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.778 [2024-07-13 19:56:01.241256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.778 [2024-07-13 19:56:01.241357] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.778 [2024-07-13 19:56:01.241384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.778 [2024-07-13 19:56:01.241509] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.778 [2024-07-13 19:56:01.241533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.778 #33 NEW cov: 12157 ft: 15365 corp: 21/1345b lim: 100 exec/s: 33 rss: 70Mb L: 62/98 MS: 1 InsertByte- 00:08:13.778 [2024-07-13 19:56:01.280873] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:11284181956786117032 len:6940 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.778 [2024-07-13 19:56:01.280904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.778 #36 NEW cov: 12157 ft: 15444 corp: 22/1372b lim: 100 exec/s: 36 rss: 70Mb L: 27/98 MS: 3 EraseBytes-ChangeBit-CMP- DE: "\331\250\234\231w\264)\000"- 00:08:13.779 [2024-07-13 19:56:01.331717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.779 [2024-07-13 19:56:01.331751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.779 [2024-07-13 19:56:01.331858] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:274877906944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.779 [2024-07-13 19:56:01.331883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.779 [2024-07-13 19:56:01.332007] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.779 [2024-07-13 19:56:01.332035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 
cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.779 [2024-07-13 19:56:01.332164] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.779 [2024-07-13 19:56:01.332192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.779 #37 NEW cov: 12157 ft: 15475 corp: 23/1459b lim: 100 exec/s: 37 rss: 70Mb L: 87/98 MS: 1 ChangeBinInt- 00:08:13.779 [2024-07-13 19:56:01.371890] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.779 [2024-07-13 19:56:01.371928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.779 [2024-07-13 19:56:01.372048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.779 [2024-07-13 19:56:01.372075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.779 [2024-07-13 19:56:01.372202] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:268435456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.779 [2024-07-13 19:56:01.372227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.779 [2024-07-13 19:56:01.372357] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.779 [2024-07-13 19:56:01.372385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.779 #38 NEW cov: 12157 ft: 15477 corp: 24/1558b lim: 100 exec/s: 38 rss: 70Mb L: 99/99 MS: 1 CopyPart- 00:08:13.779 [2024-07-13 19:56:01.422033] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15697817502389754329 len:55770 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.779 [2024-07-13 19:56:01.422069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.779 [2024-07-13 19:56:01.422190] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:15697817505862638041 len:55770 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.779 [2024-07-13 19:56:01.422218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.779 [2024-07-13 19:56:01.422344] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:2954600884213479348 len:55770 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.779 [2024-07-13 19:56:01.422370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.779 [2024-07-13 19:56:01.422497] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:15697817505862638041 len:55770 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.779 [2024-07-13 19:56:01.422521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 
00:08:14.038 #44 NEW cov: 12157 ft: 15511 corp: 25/1651b lim: 100 exec/s: 44 rss: 70Mb L: 93/99 MS: 1 PersAutoDict- DE: "\331\250\234\231w\264)\000"- 00:08:14.038 [2024-07-13 19:56:01.481434] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.038 [2024-07-13 19:56:01.481472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.038 #45 NEW cov: 12157 ft: 15522 corp: 26/1690b lim: 100 exec/s: 45 rss: 70Mb L: 39/99 MS: 1 ShuffleBytes- 00:08:14.038 [2024-07-13 19:56:01.521612] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1953184666628070171 len:6940 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.038 [2024-07-13 19:56:01.521644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.038 #46 NEW cov: 12157 ft: 15530 corp: 27/1725b lim: 100 exec/s: 46 rss: 70Mb L: 35/99 MS: 1 ChangeBit- 00:08:14.038 [2024-07-13 19:56:01.562305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.038 [2024-07-13 19:56:01.562338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.038 [2024-07-13 19:56:01.562453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:274877906944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.038 [2024-07-13 19:56:01.562475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.038 [2024-07-13 19:56:01.562599] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:4611686018427387904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.038 [2024-07-13 19:56:01.562622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.038 #47 NEW cov: 12157 ft: 15539 corp: 28/1802b lim: 100 exec/s: 47 rss: 71Mb L: 77/99 MS: 1 PersAutoDict- DE: "\331\250\234\231w\264)\000"- 00:08:14.038 [2024-07-13 19:56:01.612408] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4395513236481376256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.038 [2024-07-13 19:56:01.612440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.038 [2024-07-13 19:56:01.612528] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.038 [2024-07-13 19:56:01.612555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.038 [2024-07-13 19:56:01.612677] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.038 [2024-07-13 19:56:01.612700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.038 #48 NEW cov: 12157 ft: 15565 corp: 29/1863b lim: 100 exec/s: 48 rss: 71Mb L: 61/99 MS: 1 ChangeBinInt- 
00:08:14.038 [2024-07-13 19:56:01.652409] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.038 [2024-07-13 19:56:01.652448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.038 [2024-07-13 19:56:01.652544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:274877906944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.038 [2024-07-13 19:56:01.652567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.038 [2024-07-13 19:56:01.652690] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:4611686018427387904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.038 [2024-07-13 19:56:01.652714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.038 #49 NEW cov: 12157 ft: 15574 corp: 30/1940b lim: 100 exec/s: 49 rss: 71Mb L: 77/99 MS: 1 ChangeBinInt- 00:08:14.298 [2024-07-13 19:56:01.703121] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15697817502389754329 len:55770 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.298 [2024-07-13 19:56:01.703151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.298 [2024-07-13 19:56:01.703220] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:15697817505862638041 len:55770 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.298 [2024-07-13 19:56:01.703243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.298 [2024-07-13 19:56:01.703370] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:15697763367799871961 len:46122 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.298 [2024-07-13 19:56:01.703398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.298 [2024-07-13 19:56:01.703518] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:15697817505862638041 len:55770 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.298 [2024-07-13 19:56:01.703539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.298 [2024-07-13 19:56:01.703658] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:15697817505862638041 len:55770 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.298 [2024-07-13 19:56:01.703683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:14.298 #50 NEW cov: 12157 ft: 15611 corp: 31/2040b lim: 100 exec/s: 50 rss: 71Mb L: 100/100 MS: 1 CopyPart- 00:08:14.298 [2024-07-13 19:56:01.753021] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15697817502389754329 len:55770 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.298 [2024-07-13 19:56:01.753049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.298 [2024-07-13 19:56:01.753137] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:15697817505862638041 len:55770 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.298 [2024-07-13 19:56:01.753164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.298 [2024-07-13 19:56:01.753292] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:2954600884213479348 len:55770 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.298 [2024-07-13 19:56:01.753316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.298 [2024-07-13 19:56:01.753438] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:15697817505862638041 len:55770 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.298 [2024-07-13 19:56:01.753480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.298 #51 NEW cov: 12157 ft: 15615 corp: 32/2133b lim: 100 exec/s: 51 rss: 71Mb L: 93/100 MS: 1 ShuffleBytes- 00:08:14.298 [2024-07-13 19:56:01.792865] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.299 [2024-07-13 19:56:01.792894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.299 [2024-07-13 19:56:01.792981] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:180388626432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.299 [2024-07-13 19:56:01.793000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.299 [2024-07-13 19:56:01.793117] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.299 [2024-07-13 19:56:01.793145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.299 #52 NEW cov: 12157 ft: 15621 corp: 33/2195b lim: 100 exec/s: 52 rss: 71Mb L: 62/100 MS: 1 InsertByte- 00:08:14.299 [2024-07-13 19:56:01.833254] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.299 [2024-07-13 19:56:01.833289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.299 [2024-07-13 19:56:01.833399] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.299 [2024-07-13 19:56:01.833425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.299 [2024-07-13 19:56:01.833545] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.299 [2024-07-13 19:56:01.833569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.299 
[2024-07-13 19:56:01.833681] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.299 [2024-07-13 19:56:01.833703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.299 #53 NEW cov: 12157 ft: 15628 corp: 34/2293b lim: 100 exec/s: 53 rss: 71Mb L: 98/100 MS: 1 CopyPart- 00:08:14.299 [2024-07-13 19:56:01.873388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.299 [2024-07-13 19:56:01.873421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.299 [2024-07-13 19:56:01.873527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:274877906944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.299 [2024-07-13 19:56:01.873548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.299 [2024-07-13 19:56:01.873666] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.299 [2024-07-13 19:56:01.873688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.299 [2024-07-13 19:56:01.873808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5548434742217362765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.299 [2024-07-13 19:56:01.873827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.299 #54 NEW cov: 12157 ft: 15634 corp: 35/2387b lim: 100 exec/s: 54 rss: 71Mb L: 94/100 MS: 1 InsertRepeatedBytes- 00:08:14.299 [2024-07-13 19:56:01.922805] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.299 [2024-07-13 19:56:01.922833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.299 #55 NEW cov: 12157 ft: 15647 corp: 36/2426b lim: 100 exec/s: 55 rss: 71Mb L: 39/100 MS: 1 CopyPart- 00:08:14.558 [2024-07-13 19:56:01.973758] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:15697817502389754329 len:55770 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.558 [2024-07-13 19:56:01.973787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.558 [2024-07-13 19:56:01.973863] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:15697817505862638041 len:55770 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.558 [2024-07-13 19:56:01.973885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.558 [2024-07-13 19:56:01.974003] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:15697817505862638041 len:55770 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.558 [2024-07-13 19:56:01.974025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.558 [2024-07-13 19:56:01.974134] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:15697817505862638041 len:55770 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.558 [2024-07-13 19:56:01.974158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:14.558 #56 NEW cov: 12157 ft: 15684 corp: 37/2519b lim: 100 exec/s: 56 rss: 71Mb L: 93/100 MS: 1 CopyPart- 00:08:14.558 [2024-07-13 19:56:02.013543] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.558 [2024-07-13 19:56:02.013576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:14.558 [2024-07-13 19:56:02.013666] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:180388626432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.558 [2024-07-13 19:56:02.013690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:14.558 [2024-07-13 19:56:02.013805] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.558 [2024-07-13 19:56:02.013828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:14.558 #57 NEW cov: 12157 ft: 15693 corp: 38/2581b lim: 100 exec/s: 28 rss: 71Mb L: 62/100 MS: 1 ChangeBinInt- 00:08:14.558 #57 DONE cov: 12157 ft: 15693 corp: 38/2581b lim: 100 exec/s: 28 rss: 71Mb 00:08:14.558 ###### Recommended dictionary. ###### 00:08:14.558 "\331\250\234\231w\264)\000" # Uses: 3 00:08:14.558 ###### End of recommended dictionary. 
###### 00:08:14.558 Done 57 runs in 2 second(s) 00:08:14.558 19:56:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:08:14.558 19:56:02 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:14.558 19:56:02 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:14.558 19:56:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:08:14.558 00:08:14.558 real 1m3.582s 00:08:14.558 user 1m39.115s 00:08:14.558 sys 0m7.920s 00:08:14.558 19:56:02 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:14.558 19:56:02 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:14.558 ************************************ 00:08:14.558 END TEST nvmf_fuzz 00:08:14.558 ************************************ 00:08:14.559 19:56:02 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:14.559 19:56:02 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:14.559 19:56:02 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:14.559 19:56:02 llvm_fuzz -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:14.559 19:56:02 llvm_fuzz -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:14.559 19:56:02 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:14.820 ************************************ 00:08:14.820 START TEST vfio_fuzz 00:08:14.820 ************************************ 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:14.820 * Looking for test storage... 00:08:14.820 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@34 -- # set -e 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@6 -- # 
CONFIG_CUSTOMOCF=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@41 -- # 
CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:14.820 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@77 -- # 
CONFIG_MAX_LCORES= 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:14.821 #define SPDK_CONFIG_H 00:08:14.821 #define SPDK_CONFIG_APPS 1 00:08:14.821 #define SPDK_CONFIG_ARCH native 00:08:14.821 #undef SPDK_CONFIG_ASAN 00:08:14.821 #undef SPDK_CONFIG_AVAHI 00:08:14.821 #undef SPDK_CONFIG_CET 00:08:14.821 #define SPDK_CONFIG_COVERAGE 1 00:08:14.821 #define SPDK_CONFIG_CROSS_PREFIX 00:08:14.821 #undef SPDK_CONFIG_CRYPTO 00:08:14.821 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:14.821 #undef SPDK_CONFIG_CUSTOMOCF 00:08:14.821 #undef SPDK_CONFIG_DAOS 00:08:14.821 #define SPDK_CONFIG_DAOS_DIR 00:08:14.821 #define SPDK_CONFIG_DEBUG 1 00:08:14.821 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:14.821 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:08:14.821 #define SPDK_CONFIG_DPDK_INC_DIR 
//var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include 00:08:14.821 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:08:14.821 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:14.821 #undef SPDK_CONFIG_DPDK_UADK 00:08:14.821 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:14.821 #define SPDK_CONFIG_EXAMPLES 1 00:08:14.821 #undef SPDK_CONFIG_FC 00:08:14.821 #define SPDK_CONFIG_FC_PATH 00:08:14.821 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:14.821 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:14.821 #undef SPDK_CONFIG_FUSE 00:08:14.821 #define SPDK_CONFIG_FUZZER 1 00:08:14.821 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:14.821 #undef SPDK_CONFIG_GOLANG 00:08:14.821 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:14.821 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:14.821 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:14.821 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:14.821 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:14.821 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:14.821 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:14.821 #define SPDK_CONFIG_IDXD 1 00:08:14.821 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:14.821 #undef SPDK_CONFIG_IPSEC_MB 00:08:14.821 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:14.821 #define SPDK_CONFIG_ISAL 1 00:08:14.821 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:14.821 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:14.821 #define SPDK_CONFIG_LIBDIR 00:08:14.821 #undef SPDK_CONFIG_LTO 00:08:14.821 #define SPDK_CONFIG_MAX_LCORES 00:08:14.821 #define SPDK_CONFIG_NVME_CUSE 1 00:08:14.821 #undef SPDK_CONFIG_OCF 00:08:14.821 #define SPDK_CONFIG_OCF_PATH 00:08:14.821 #define SPDK_CONFIG_OPENSSL_PATH 00:08:14.821 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:14.821 #define SPDK_CONFIG_PGO_DIR 00:08:14.821 #undef SPDK_CONFIG_PGO_USE 00:08:14.821 #define SPDK_CONFIG_PREFIX /usr/local 00:08:14.821 #undef SPDK_CONFIG_RAID5F 00:08:14.821 #undef SPDK_CONFIG_RBD 00:08:14.821 #define SPDK_CONFIG_RDMA 1 00:08:14.821 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:14.821 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:14.821 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:14.821 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:14.821 #undef SPDK_CONFIG_SHARED 00:08:14.821 #undef SPDK_CONFIG_SMA 00:08:14.821 #define SPDK_CONFIG_TESTS 1 00:08:14.821 #undef SPDK_CONFIG_TSAN 00:08:14.821 #define SPDK_CONFIG_UBLK 1 00:08:14.821 #define SPDK_CONFIG_UBSAN 1 00:08:14.821 #undef SPDK_CONFIG_UNIT_TESTS 00:08:14.821 #undef SPDK_CONFIG_URING 00:08:14.821 #define SPDK_CONFIG_URING_PATH 00:08:14.821 #undef SPDK_CONFIG_URING_ZNS 00:08:14.821 #undef SPDK_CONFIG_USDT 00:08:14.821 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:14.821 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:14.821 #define SPDK_CONFIG_VFIO_USER 1 00:08:14.821 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:14.821 #define SPDK_CONFIG_VHOST 1 00:08:14.821 #define SPDK_CONFIG_VIRTIO 1 00:08:14.821 #undef SPDK_CONFIG_VTUNE 00:08:14.821 #define SPDK_CONFIG_VTUNE_DIR 00:08:14.821 #define SPDK_CONFIG_WERROR 1 00:08:14.821 #define SPDK_CONFIG_WPDK_DIR 00:08:14.821 #undef SPDK_CONFIG_XNVME 00:08:14.821 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 
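The applications.sh trace above (entries @22 to @24) checks the generated include/spdk/config.h for the SPDK_CONFIG_DEBUG flag: the escaped glob in the [[ ... ]] test resolves to "#define SPDK_CONFIG_DEBUG", and the follow-up arithmetic test gates the SPDK_AUTOTEST_DEBUG_APPS handling. A minimal bash sketch of that technique, not the verbatim script, using the header path from the trace:

    # read the generated config header and test for a flag with a glob match
    config_h=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h
    if [[ -e "$config_h" && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        # only a debug build honors the debug-apps request (see @24)
        (( SPDK_AUTOTEST_DEBUG_APPS )) && echo "debug apps requested on a debug build"
    fi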
00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- paths/export.sh@5 -- # export PATH 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- pm/common@68 -- # uname -s 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- pm/common@68 -- # PM_OS=Linux 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- pm/common@76 -- # SUDO[0]= 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:08:14.821 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@57 -- # : 1 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@61 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@63 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@65 -- # : 1 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@67 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@69 -- # : 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@71 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@73 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@75 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@77 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@79 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@81 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@83 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@85 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@87 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@89 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@91 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@93 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@95 -- # : 0 00:08:14.822 
19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@97 -- # : 1 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@99 -- # : 1 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@101 -- # : rdma 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@103 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@105 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@107 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@109 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@111 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@113 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@115 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@117 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@119 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@121 -- # : 1 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@125 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@127 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@129 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@131 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 
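The long run of ": value" followed by "export SPDK_TEST_*" pairs in the autotest_common.sh trace above is consistent with the standard default-then-export idiom, where the colon builtin forces a ${VAR:=default} expansion before the variable is exported. A hedged sketch of that pattern, with values mirroring this run (fuzzer tests enabled, most others off); the real script may differ in detail:

    # give each test knob a default only if the caller did not set it, then export it
    : "${SPDK_TEST_FUZZER:=1}"
    export SPDK_TEST_FUZZER
    : "${SPDK_TEST_FUZZER_SHORT:=1}"
    export SPDK_TEST_FUZZER_SHORT
    : "${SPDK_TEST_NVME:=0}"
    export SPDK_TEST_NVME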
00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@133 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@135 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@137 -- # : v22.11.4 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@139 -- # : true 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@141 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@143 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@145 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@147 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@149 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@151 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@153 -- # : 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@155 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@157 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@159 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@161 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@163 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@166 -- # : 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@168 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@170 -- # : 0 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- 
common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:14.822 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@180 -- # 
PCI_BLOCK_SYNC_ON_RESET=yes 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@199 -- # cat 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:14.823 19:56:02 
llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@262 -- # export valgrind= 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@262 -- # valgrind= 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@268 -- # uname -s 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@278 -- # MAKE=make 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j112 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@298 -- # TEST_MODE= 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@317 -- # [[ -z 3684911 ]] 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@317 -- # kill -0 3684911 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- 
common/autotest_common.sh@330 -- # local mount target_dir 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.9lZzXn 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:08:14.823 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.9lZzXn/tests/vfio /tmp/spdk.9lZzXn 00:08:15.083 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:08:15.083 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:15.083 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:08:15.083 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@326 -- # df -T 00:08:15.083 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:08:15.083 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:08:15.083 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:08:15.083 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:08:15.083 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:08:15.083 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:15.083 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:08:15.083 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:08:15.083 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=954408960 00:08:15.083 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:08:15.083 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4330020864 00:08:15.083 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:15.083 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:08:15.083 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:08:15.083 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=53022830592 00:08:15.083 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=61742317568 00:08:15.083 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=8719486976 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 
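Entries @326 through @362 above show set_test_storage indexing the host's filesystems: the header line of df -T is dropped with grep -v and each remaining row is read into associative arrays keyed by mount point. A rough equivalent of the traced loop, using the same variable names as the trace:

    # map each mount point to its device, filesystem type, size, used and available space
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$size
        avails["$mount"]=$avail
        uses["$mount"]=$use
    done < <(df -T | grep -v Filesystem)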
00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=30866448384 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871158784 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=12342484992 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=12348465152 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=5980160 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=30870544384 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871158784 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=614400 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=6174224384 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=6174228480 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:08:15.084 * Looking for test storage... 
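The next trace entries (@367 to @388, below) pick a storage candidate and verify free space: df plus awk resolves the candidate directory to its mount point, the Available figure recorded above becomes target_space, and on tmpfs/ramfs or the root mount an extra 95 percent-of-filesystem check is applied before SPDK_TEST_STORAGE is exported. A rough sketch of that check under the values shown in the trace; target_dir stands for the candidate directory being probed:

    requested_size=2214592512                          # 2 GiB request plus margin (@357)
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails[$mount]}                     # 53022830592 for / on this host
    if (( target_space >= requested_size )); then
        # shared or volatile mounts must also keep the request under 95% of the filesystem
        if [[ ${fss[$mount]} == tmpfs || ${fss[$mount]} == ramfs || $mount == / ]]; then
            new_size=$(( requested_size + ${uses[$mount]} ))   # 10934079488 in this run
            (( new_size * 100 / ${sizes[$mount]} > 95 )) && echo "not enough space on $mount"
        fi
        export SPDK_TEST_STORAGE=$target_dir
    fi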
00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@367 -- # local target_space new_size 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@371 -- # mount=/ 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@373 -- # target_space=53022830592 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@380 -- # new_size=10934079488 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:15.084 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@388 -- # return 0 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1683 -- # true 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@27 -- # exec 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@29 -- # exec 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@18 -- # set -x 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- ../common.sh@8 -- # pids=() 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- ../common.sh@70 -- # local time=1 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:08:15.084 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo 
leak:nvmf_ctrlr_create 00:08:15.084 19:56:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:08:15.084 [2024-07-13 19:56:02.568032] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:15.084 [2024-07-13 19:56:02.568100] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3684958 ] 00:08:15.084 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.084 [2024-07-13 19:56:02.638931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.084 [2024-07-13 19:56:02.678361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.345 INFO: Running with entropic power schedule (0xFF, 100). 00:08:15.345 INFO: Seed: 1434454927 00:08:15.345 INFO: Loaded 1 modules (354499 inline 8-bit counters): 354499 [0x27a440c, 0x27faccf), 00:08:15.345 INFO: Loaded 1 PC tables (354499 PCs): 354499 [0x27facd0,0x2d63900), 00:08:15.345 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:15.345 INFO: A corpus is not provided, starting from an empty corpus 00:08:15.345 #2 INITED exec/s: 0 rss: 63Mb 00:08:15.345 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:15.345 This may also happen if the target rejected all inputs we tried so far 00:08:15.345 [2024-07-13 19:56:02.908987] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:08:15.864 NEW_FUNC[1/653]: 0x4933d0 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:08:15.864 NEW_FUNC[2/653]: 0x498ee0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:15.864 #10 NEW cov: 10841 ft: 10894 corp: 2/7b lim: 6 exec/s: 0 rss: 68Mb L: 6/6 MS: 3 CrossOver-ChangeByte-InsertRepeatedBytes- 00:08:16.123 NEW_FUNC[1/3]: 0x142fb70 in cq_is_full /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:1723 00:08:16.123 NEW_FUNC[2/3]: 0x1430010 in cq_tail_advance /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:606 00:08:16.123 #14 NEW cov: 10952 ft: 14201 corp: 3/13b lim: 6 exec/s: 0 rss: 69Mb L: 6/6 MS: 4 CMP-ChangeByte-ShuffleBytes-InsertByte- DE: "\000\000\000\000"- 00:08:16.123 NEW_FUNC[1/1]: 0x1a4ec20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:16.123 #15 NEW cov: 10969 ft: 15985 corp: 4/19b lim: 6 exec/s: 0 rss: 70Mb L: 6/6 MS: 1 ShuffleBytes- 00:08:16.381 #16 NEW cov: 10969 ft: 16705 corp: 5/25b lim: 6 exec/s: 16 rss: 70Mb L: 6/6 MS: 1 CopyPart- 00:08:16.640 #22 NEW cov: 10969 ft: 16970 corp: 6/31b lim: 6 exec/s: 22 rss: 70Mb L: 6/6 MS: 1 ChangeByte- 00:08:16.640 #28 NEW cov: 10969 ft: 17159 corp: 7/37b lim: 6 exec/s: 28 rss: 70Mb L: 6/6 MS: 1 CopyPart- 00:08:16.899 #29 NEW cov: 10969 ft: 17201 corp: 8/43b lim: 6 exec/s: 29 rss: 70Mb L: 6/6 MS: 1 ShuffleBytes- 00:08:17.159 #30 NEW 
cov: 10969 ft: 17523 corp: 9/49b lim: 6 exec/s: 30 rss: 70Mb L: 6/6 MS: 1 ShuffleBytes- 00:08:17.159 #31 NEW cov: 10976 ft: 17750 corp: 10/55b lim: 6 exec/s: 31 rss: 70Mb L: 6/6 MS: 1 CrossOver- 00:08:17.418 #32 pulse cov: 10976 ft: 18050 corp: 10/55b lim: 6 exec/s: 16 rss: 70Mb 00:08:17.418 #32 NEW cov: 10976 ft: 18050 corp: 11/61b lim: 6 exec/s: 16 rss: 70Mb L: 6/6 MS: 1 ChangeBit- 00:08:17.418 #32 DONE cov: 10976 ft: 18050 corp: 11/61b lim: 6 exec/s: 16 rss: 70Mb 00:08:17.418 ###### Recommended dictionary. ###### 00:08:17.418 "\000\000\000\000" # Uses: 1 00:08:17.418 ###### End of recommended dictionary. ###### 00:08:17.418 Done 32 runs in 2 second(s) 00:08:17.418 [2024-07-13 19:56:04.935630] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:08:17.678 19:56:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:08:17.678 19:56:05 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:17.678 19:56:05 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:17.678 19:56:05 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:08:17.678 19:56:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:08:17.678 19:56:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:17.678 19:56:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:17.678 19:56:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:17.678 19:56:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:08:17.678 19:56:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:08:17.678 19:56:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:08:17.678 19:56:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:08:17.678 19:56:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:17.678 19:56:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:17.678 19:56:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:17.678 19:56:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:08:17.678 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:17.678 19:56:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:17.678 19:56:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:17.678 19:56:05 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:08:17.678 [2024-07-13 19:56:05.218756] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 
initialization... 00:08:17.678 [2024-07-13 19:56:05.218827] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3685489 ] 00:08:17.678 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.678 [2024-07-13 19:56:05.291155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.678 [2024-07-13 19:56:05.328855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.937 INFO: Running with entropic power schedule (0xFF, 100). 00:08:17.937 INFO: Seed: 4080453047 00:08:17.937 INFO: Loaded 1 modules (354499 inline 8-bit counters): 354499 [0x27a440c, 0x27faccf), 00:08:17.937 INFO: Loaded 1 PC tables (354499 PCs): 354499 [0x27facd0,0x2d63900), 00:08:17.937 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:17.937 INFO: A corpus is not provided, starting from an empty corpus 00:08:17.937 #2 INITED exec/s: 0 rss: 63Mb 00:08:17.937 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:17.937 This may also happen if the target rejected all inputs we tried so far 00:08:17.937 [2024-07-13 19:56:05.556284] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:08:18.196 [2024-07-13 19:56:05.599502] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:18.196 [2024-07-13 19:56:05.599529] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:18.196 [2024-07-13 19:56:05.599547] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:18.455 NEW_FUNC[1/658]: 0x493970 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:08:18.455 NEW_FUNC[2/658]: 0x498ee0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:18.455 #6 NEW cov: 10917 ft: 10682 corp: 2/5b lim: 4 exec/s: 0 rss: 70Mb L: 4/4 MS: 4 InsertByte-InsertByte-ChangeByte-CopyPart- 00:08:18.455 [2024-07-13 19:56:06.074741] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:18.455 [2024-07-13 19:56:06.074778] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:18.455 [2024-07-13 19:56:06.074795] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:18.715 #12 NEW cov: 10931 ft: 14437 corp: 3/9b lim: 4 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:18.715 [2024-07-13 19:56:06.244139] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:18.715 [2024-07-13 19:56:06.244161] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:18.715 [2024-07-13 19:56:06.244179] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:18.715 NEW_FUNC[1/1]: 0x1a4ec20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:18.715 #13 NEW cov: 10948 ft: 15928 corp: 4/13b lim: 4 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 ChangeByte- 00:08:18.974 [2024-07-13 19:56:06.415271] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:18.974 [2024-07-13 19:56:06.415294] vfio_user.c:3106:vfio_user_log: *ERROR*: 
/tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:18.974 [2024-07-13 19:56:06.415311] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:18.974 #20 NEW cov: 10948 ft: 16386 corp: 5/17b lim: 4 exec/s: 0 rss: 71Mb L: 4/4 MS: 2 EraseBytes-InsertByte- 00:08:18.974 [2024-07-13 19:56:06.590372] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:18.974 [2024-07-13 19:56:06.590394] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:18.974 [2024-07-13 19:56:06.590411] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:19.233 #24 NEW cov: 10948 ft: 17368 corp: 6/21b lim: 4 exec/s: 24 rss: 71Mb L: 4/4 MS: 4 CrossOver-ChangeByte-ChangeByte-InsertByte- 00:08:19.233 [2024-07-13 19:56:06.769858] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:19.233 [2024-07-13 19:56:06.769880] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:19.233 [2024-07-13 19:56:06.769897] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:19.233 #25 NEW cov: 10948 ft: 17717 corp: 7/25b lim: 4 exec/s: 25 rss: 71Mb L: 4/4 MS: 1 ShuffleBytes- 00:08:19.492 [2024-07-13 19:56:06.936927] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:19.492 [2024-07-13 19:56:06.936949] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:19.492 [2024-07-13 19:56:06.936966] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:19.492 #26 NEW cov: 10948 ft: 17749 corp: 8/29b lim: 4 exec/s: 26 rss: 71Mb L: 4/4 MS: 1 CopyPart- 00:08:19.492 [2024-07-13 19:56:07.104108] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:19.492 [2024-07-13 19:56:07.104130] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:19.492 [2024-07-13 19:56:07.104146] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:19.752 #27 NEW cov: 10948 ft: 18108 corp: 9/33b lim: 4 exec/s: 27 rss: 71Mb L: 4/4 MS: 1 ChangeBinInt- 00:08:19.752 [2024-07-13 19:56:07.270241] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:19.752 [2024-07-13 19:56:07.270266] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:19.752 [2024-07-13 19:56:07.270282] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:19.752 #28 NEW cov: 10955 ft: 18178 corp: 10/37b lim: 4 exec/s: 28 rss: 72Mb L: 4/4 MS: 1 CopyPart- 00:08:20.012 [2024-07-13 19:56:07.437556] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:20.012 [2024-07-13 19:56:07.437578] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:20.012 [2024-07-13 19:56:07.437595] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:20.012 #34 NEW cov: 10955 ft: 18193 corp: 11/41b lim: 4 exec/s: 17 rss: 72Mb L: 4/4 MS: 1 ChangeBit- 00:08:20.012 #34 DONE cov: 10955 ft: 18193 corp: 11/41b lim: 4 exec/s: 17 rss: 72Mb 00:08:20.012 Done 34 runs in 2 second(s) 00:08:20.012 [2024-07-13 19:56:07.558659] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:08:20.272 19:56:07 llvm_fuzz.vfio_fuzz -- 
vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:08:20.272 19:56:07 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:20.272 19:56:07 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:20.272 19:56:07 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:08:20.272 19:56:07 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:08:20.272 19:56:07 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:20.272 19:56:07 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:20.272 19:56:07 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:20.272 19:56:07 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:08:20.272 19:56:07 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:08:20.272 19:56:07 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:08:20.272 19:56:07 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:08:20.272 19:56:07 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:20.272 19:56:07 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:20.272 19:56:07 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:20.272 19:56:07 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:08:20.272 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:20.272 19:56:07 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:20.272 19:56:07 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:20.272 19:56:07 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:08:20.272 [2024-07-13 19:56:07.839744] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:20.272 [2024-07-13 19:56:07.839816] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3685905 ] 00:08:20.272 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.272 [2024-07-13 19:56:07.912425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.530 [2024-07-13 19:56:07.950869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.530 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:20.530 INFO: Seed: 2409498000 00:08:20.530 INFO: Loaded 1 modules (354499 inline 8-bit counters): 354499 [0x27a440c, 0x27faccf), 00:08:20.530 INFO: Loaded 1 PC tables (354499 PCs): 354499 [0x27facd0,0x2d63900), 00:08:20.530 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:20.530 INFO: A corpus is not provided, starting from an empty corpus 00:08:20.530 #2 INITED exec/s: 0 rss: 63Mb 00:08:20.530 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:20.530 This may also happen if the target rejected all inputs we tried so far 00:08:20.530 [2024-07-13 19:56:08.179459] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:08:20.787 [2024-07-13 19:56:08.227522] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:21.046 NEW_FUNC[1/657]: 0x494350 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:08:21.046 NEW_FUNC[2/657]: 0x498ee0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:21.046 #3 NEW cov: 10903 ft: 10852 corp: 2/9b lim: 8 exec/s: 0 rss: 68Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:08:21.046 [2024-07-13 19:56:08.706888] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:21.305 #6 NEW cov: 10917 ft: 13620 corp: 3/17b lim: 8 exec/s: 0 rss: 70Mb L: 8/8 MS: 3 ChangeBit-InsertRepeatedBytes-CrossOver- 00:08:21.305 [2024-07-13 19:56:08.889108] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:21.563 NEW_FUNC[1/1]: 0x1a4ec20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:21.563 #12 NEW cov: 10934 ft: 14167 corp: 4/25b lim: 8 exec/s: 0 rss: 70Mb L: 8/8 MS: 1 ChangeBinInt- 00:08:21.563 [2024-07-13 19:56:09.062969] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:21.563 #13 NEW cov: 10934 ft: 16067 corp: 5/33b lim: 8 exec/s: 13 rss: 70Mb L: 8/8 MS: 1 CrossOver- 00:08:21.822 [2024-07-13 19:56:09.231117] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:21.822 #14 NEW cov: 10934 ft: 16430 corp: 6/41b lim: 8 exec/s: 14 rss: 70Mb L: 8/8 MS: 1 ChangeByte- 00:08:21.822 [2024-07-13 19:56:09.400389] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:22.081 #15 NEW cov: 10934 ft: 16960 corp: 7/49b lim: 8 exec/s: 15 rss: 70Mb L: 8/8 MS: 1 ChangeBinInt- 00:08:22.081 [2024-07-13 19:56:09.570196] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:22.081 #16 NEW cov: 10934 ft: 17103 corp: 8/57b lim: 8 exec/s: 16 rss: 70Mb L: 8/8 MS: 1 ChangeBit- 00:08:22.081 [2024-07-13 19:56:09.740448] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:22.340 #17 NEW cov: 10934 ft: 17654 corp: 9/65b lim: 8 exec/s: 17 rss: 70Mb L: 8/8 MS: 1 CopyPart- 00:08:22.340 [2024-07-13 19:56:09.912394] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:22.600 #18 NEW cov: 10941 ft: 17839 corp: 10/73b lim: 8 exec/s: 18 rss: 70Mb L: 8/8 MS: 1 ShuffleBytes- 00:08:22.600 [2024-07-13 19:56:10.090497] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 
00:08:22.600 #19 NEW cov: 10941 ft: 17941 corp: 11/81b lim: 8 exec/s: 9 rss: 70Mb L: 8/8 MS: 1 ChangeBit- 00:08:22.600 #19 DONE cov: 10941 ft: 17941 corp: 11/81b lim: 8 exec/s: 9 rss: 70Mb 00:08:22.600 Done 19 runs in 2 second(s) 00:08:22.600 [2024-07-13 19:56:10.212644] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:08:22.859 19:56:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:08:22.859 19:56:10 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:22.859 19:56:10 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:22.859 19:56:10 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:08:22.859 19:56:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:08:22.859 19:56:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:22.859 19:56:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:22.859 19:56:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:22.859 19:56:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:08:22.859 19:56:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:08:22.859 19:56:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:08:22.859 19:56:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:08:22.859 19:56:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:22.859 19:56:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:22.859 19:56:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:22.859 19:56:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:08:22.859 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:22.859 19:56:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:22.859 19:56:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:22.859 19:56:10 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:08:22.859 [2024-07-13 19:56:10.496895] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:22.859 [2024-07-13 19:56:10.496989] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3686314 ] 00:08:23.117 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.118 [2024-07-13 19:56:10.570184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.118 [2024-07-13 19:56:10.609224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.376 INFO: Running with entropic power schedule (0xFF, 100). 00:08:23.376 INFO: Seed: 778520271 00:08:23.376 INFO: Loaded 1 modules (354499 inline 8-bit counters): 354499 [0x27a440c, 0x27faccf), 00:08:23.376 INFO: Loaded 1 PC tables (354499 PCs): 354499 [0x27facd0,0x2d63900), 00:08:23.376 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:23.376 INFO: A corpus is not provided, starting from an empty corpus 00:08:23.376 #2 INITED exec/s: 0 rss: 63Mb 00:08:23.376 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:23.376 This may also happen if the target rejected all inputs we tried so far 00:08:23.376 [2024-07-13 19:56:10.848563] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:08:23.635 NEW_FUNC[1/655]: 0x494a30 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:08:23.635 NEW_FUNC[2/655]: 0x498ee0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:23.635 #80 NEW cov: 10910 ft: 10830 corp: 2/33b lim: 32 exec/s: 0 rss: 70Mb L: 32/32 MS: 3 InsertRepeatedBytes-ChangeByte-CMP- DE: "\377\377\377\377\377\377\377\377"- 00:08:23.894 NEW_FUNC[1/2]: 0xf26580 in spdk_get_ticks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/env.c:296 00:08:23.894 NEW_FUNC[2/2]: 0xf265e0 in rte_get_timer_cycles /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build/include/generic/rte_cycles.h:94 00:08:23.894 #81 NEW cov: 10929 ft: 13935 corp: 3/65b lim: 32 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:24.154 NEW_FUNC[1/1]: 0x1a4ec20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:24.154 #82 NEW cov: 10946 ft: 16357 corp: 4/97b lim: 32 exec/s: 0 rss: 71Mb L: 32/32 MS: 1 CMP- DE: "H\005\234\005\000\000\000\000"- 00:08:24.413 #83 NEW cov: 10946 ft: 16887 corp: 5/129b lim: 32 exec/s: 83 rss: 71Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:24.413 #84 NEW cov: 10946 ft: 17323 corp: 6/161b lim: 32 exec/s: 84 rss: 71Mb L: 32/32 MS: 1 PersAutoDict- DE: "H\005\234\005\000\000\000\000"- 00:08:24.671 #85 NEW cov: 10946 ft: 17456 corp: 7/193b lim: 32 exec/s: 85 rss: 71Mb L: 32/32 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:08:24.930 #91 NEW cov: 10946 ft: 17728 corp: 8/225b lim: 32 exec/s: 91 rss: 71Mb L: 32/32 MS: 1 ChangeBit- 00:08:24.930 #92 NEW cov: 10946 ft: 17783 corp: 9/257b lim: 32 exec/s: 92 rss: 71Mb L: 32/32 MS: 1 ChangeASCIIInt- 00:08:25.188 #93 NEW cov: 10953 ft: 17804 corp: 10/289b lim: 32 exec/s: 93 rss: 72Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:25.446 #94 NEW cov: 10953 ft: 17867 corp: 11/321b lim: 32 exec/s: 47 rss: 72Mb L: 32/32 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:08:25.446 #94 DONE cov: 10953 ft: 17867 corp: 11/321b lim: 32 exec/s: 47 rss: 72Mb 00:08:25.446 ###### 
Recommended dictionary. ###### 00:08:25.446 "\377\377\377\377\377\377\377\377" # Uses: 2 00:08:25.446 "H\005\234\005\000\000\000\000" # Uses: 1 00:08:25.446 ###### End of recommended dictionary. ###### 00:08:25.446 Done 94 runs in 2 second(s) 00:08:25.446 [2024-07-13 19:56:12.921639] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:08:25.705 19:56:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:08:25.705 19:56:13 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:25.705 19:56:13 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:25.705 19:56:13 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:25.705 19:56:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:08:25.705 19:56:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:25.705 19:56:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:25.705 19:56:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:25.705 19:56:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:08:25.705 19:56:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:08:25.705 19:56:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:08:25.705 19:56:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:08:25.705 19:56:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:25.705 19:56:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:25.705 19:56:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:25.705 19:56:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:08:25.705 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:25.705 19:56:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:25.705 19:56:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:25.705 19:56:13 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:08:25.705 [2024-07-13 19:56:13.200217] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:25.705 [2024-07-13 19:56:13.200291] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3686847 ] 00:08:25.705 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.705 [2024-07-13 19:56:13.271147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.705 [2024-07-13 19:56:13.308706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.963 INFO: Running with entropic power schedule (0xFF, 100). 00:08:25.963 INFO: Seed: 3471519871 00:08:25.963 INFO: Loaded 1 modules (354499 inline 8-bit counters): 354499 [0x27a440c, 0x27faccf), 00:08:25.963 INFO: Loaded 1 PC tables (354499 PCs): 354499 [0x27facd0,0x2d63900), 00:08:25.963 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:25.963 INFO: A corpus is not provided, starting from an empty corpus 00:08:25.963 #2 INITED exec/s: 0 rss: 63Mb 00:08:25.963 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:25.963 This may also happen if the target rejected all inputs we tried so far 00:08:25.963 [2024-07-13 19:56:13.540237] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:08:26.480 NEW_FUNC[1/656]: 0x4952b0 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:08:26.480 NEW_FUNC[2/656]: 0x498ee0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:26.480 #22 NEW cov: 10911 ft: 10881 corp: 2/33b lim: 32 exec/s: 0 rss: 69Mb L: 32/32 MS: 5 CrossOver-InsertRepeatedBytes-CMP-ShuffleBytes-CopyPart- DE: "!\000"- 00:08:26.739 NEW_FUNC[1/1]: 0x14108d0 in cq_tailp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:586 00:08:26.739 #43 NEW cov: 10927 ft: 13806 corp: 3/65b lim: 32 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 ChangeByte- 00:08:26.739 NEW_FUNC[1/1]: 0x1a4ec20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:26.739 #44 NEW cov: 10944 ft: 15147 corp: 4/97b lim: 32 exec/s: 0 rss: 70Mb L: 32/32 MS: 1 ChangeByte- 00:08:27.017 #45 NEW cov: 10944 ft: 15943 corp: 5/129b lim: 32 exec/s: 0 rss: 71Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:27.017 #51 NEW cov: 10944 ft: 16178 corp: 6/161b lim: 32 exec/s: 51 rss: 71Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:27.275 #52 NEW cov: 10944 ft: 16477 corp: 7/193b lim: 32 exec/s: 52 rss: 71Mb L: 32/32 MS: 1 ChangeByte- 00:08:27.533 #53 NEW cov: 10944 ft: 16771 corp: 8/225b lim: 32 exec/s: 53 rss: 71Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:27.533 #54 NEW cov: 10944 ft: 17340 corp: 9/257b lim: 32 exec/s: 54 rss: 71Mb L: 32/32 MS: 1 ChangeBit- 00:08:27.791 #65 NEW cov: 10951 ft: 17612 corp: 10/289b lim: 32 exec/s: 65 rss: 71Mb L: 32/32 MS: 1 PersAutoDict- DE: "!\000"- 00:08:28.050 #66 NEW cov: 10951 ft: 17942 corp: 11/321b lim: 32 exec/s: 66 rss: 71Mb L: 32/32 MS: 1 ChangeByte- 00:08:28.050 #67 NEW cov: 10951 ft: 17975 corp: 12/353b lim: 32 exec/s: 33 rss: 71Mb L: 32/32 MS: 1 ChangeBit- 00:08:28.050 #67 DONE cov: 10951 ft: 17975 corp: 12/353b lim: 32 exec/s: 33 rss: 71Mb 00:08:28.050 ###### Recommended dictionary. ###### 00:08:28.050 "!\000" # Uses: 1 00:08:28.050 ###### End of recommended dictionary. 
###### 00:08:28.050 Done 67 runs in 2 second(s) 00:08:28.050 [2024-07-13 19:56:15.673631] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:08:28.310 19:56:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:08:28.310 19:56:15 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:28.310 19:56:15 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:28.310 19:56:15 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:08:28.310 19:56:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:08:28.310 19:56:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:28.310 19:56:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:28.310 19:56:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:28.310 19:56:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:08:28.310 19:56:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:08:28.310 19:56:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:08:28.310 19:56:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:08:28.310 19:56:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:28.310 19:56:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:28.310 19:56:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:28.310 19:56:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:08:28.310 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:28.310 19:56:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:28.310 19:56:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:28.310 19:56:15 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:08:28.310 [2024-07-13 19:56:15.951864] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:28.310 [2024-07-13 19:56:15.951935] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3687388 ] 00:08:28.569 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.569 [2024-07-13 19:56:16.021403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.569 [2024-07-13 19:56:16.058597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.827 INFO: Running with entropic power schedule (0xFF, 100). 00:08:28.827 INFO: Seed: 1924565738 00:08:28.827 INFO: Loaded 1 modules (354499 inline 8-bit counters): 354499 [0x27a440c, 0x27faccf), 00:08:28.827 INFO: Loaded 1 PC tables (354499 PCs): 354499 [0x27facd0,0x2d63900), 00:08:28.827 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:28.827 INFO: A corpus is not provided, starting from an empty corpus 00:08:28.827 #2 INITED exec/s: 0 rss: 63Mb 00:08:28.827 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:28.827 This may also happen if the target rejected all inputs we tried so far 00:08:28.827 [2024-07-13 19:56:16.284618] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:08:28.827 [2024-07-13 19:56:16.328494] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:28.827 [2024-07-13 19:56:16.328531] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:29.086 NEW_FUNC[1/658]: 0x495cb0 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:08:29.086 NEW_FUNC[2/658]: 0x498ee0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:29.086 #86 NEW cov: 10922 ft: 10743 corp: 2/14b lim: 13 exec/s: 0 rss: 69Mb L: 13/13 MS: 4 ShuffleBytes-InsertRepeatedBytes-InsertByte-InsertByte- 00:08:29.344 [2024-07-13 19:56:16.781244] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:29.344 [2024-07-13 19:56:16.781290] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:29.344 #87 NEW cov: 10936 ft: 13686 corp: 3/27b lim: 13 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 ChangeBit- 00:08:29.344 [2024-07-13 19:56:16.951597] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:29.344 [2024-07-13 19:56:16.951628] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:29.602 #88 NEW cov: 10936 ft: 15242 corp: 4/40b lim: 13 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:29.602 [2024-07-13 19:56:17.117689] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:29.602 [2024-07-13 19:56:17.117723] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:29.602 NEW_FUNC[1/1]: 0x1a4ec20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:29.602 #89 NEW cov: 10953 ft: 15898 corp: 5/53b lim: 13 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 ChangeBit- 00:08:29.861 [2024-07-13 19:56:17.284091] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:29.861 [2024-07-13 19:56:17.284123] vfio_user.c: 
144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:29.861 #90 NEW cov: 10953 ft: 16462 corp: 6/66b lim: 13 exec/s: 90 rss: 70Mb L: 13/13 MS: 1 ChangeByte- 00:08:29.861 [2024-07-13 19:56:17.448245] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:29.861 [2024-07-13 19:56:17.448275] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:30.119 #96 NEW cov: 10953 ft: 16902 corp: 7/79b lim: 13 exec/s: 96 rss: 70Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:30.119 [2024-07-13 19:56:17.615378] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:30.119 [2024-07-13 19:56:17.615408] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:30.119 #102 NEW cov: 10953 ft: 16944 corp: 8/92b lim: 13 exec/s: 102 rss: 70Mb L: 13/13 MS: 1 ChangeByte- 00:08:30.378 [2024-07-13 19:56:17.781609] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:30.378 [2024-07-13 19:56:17.781639] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:30.378 #103 NEW cov: 10953 ft: 17357 corp: 9/105b lim: 13 exec/s: 103 rss: 70Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:30.378 [2024-07-13 19:56:17.948931] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:30.378 [2024-07-13 19:56:17.948960] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:30.637 #104 NEW cov: 10953 ft: 17567 corp: 10/118b lim: 13 exec/s: 104 rss: 70Mb L: 13/13 MS: 1 ShuffleBytes- 00:08:30.637 [2024-07-13 19:56:18.116607] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:30.637 [2024-07-13 19:56:18.116637] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:30.637 #105 NEW cov: 10960 ft: 17675 corp: 11/131b lim: 13 exec/s: 105 rss: 70Mb L: 13/13 MS: 1 ChangeBinInt- 00:08:30.637 [2024-07-13 19:56:18.284404] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:30.637 [2024-07-13 19:56:18.284434] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:30.896 #106 NEW cov: 10960 ft: 17715 corp: 12/144b lim: 13 exec/s: 53 rss: 71Mb L: 13/13 MS: 1 ChangeBit- 00:08:30.896 #106 DONE cov: 10960 ft: 17715 corp: 12/144b lim: 13 exec/s: 53 rss: 71Mb 00:08:30.896 Done 106 runs in 2 second(s) 00:08:30.896 [2024-07-13 19:56:18.400641] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:08:31.155 19:56:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:08:31.155 19:56:18 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:31.155 19:56:18 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:31.155 19:56:18 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:08:31.155 19:56:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:08:31.155 19:56:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:31.155 19:56:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:31.155 19:56:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:31.155 19:56:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:08:31.155 19:56:18 llvm_fuzz.vfio_fuzz 
-- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:08:31.155 19:56:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:08:31.155 19:56:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:08:31.155 19:56:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:31.155 19:56:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:31.155 19:56:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:31.155 19:56:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:08:31.155 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:31.155 19:56:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:31.155 19:56:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:31.155 19:56:18 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:08:31.155 [2024-07-13 19:56:18.681039] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:31.155 [2024-07-13 19:56:18.681109] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3687857 ] 00:08:31.155 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.155 [2024-07-13 19:56:18.752977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.155 [2024-07-13 19:56:18.791629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.414 INFO: Running with entropic power schedule (0xFF, 100). 00:08:31.414 INFO: Seed: 364584435 00:08:31.414 INFO: Loaded 1 modules (354499 inline 8-bit counters): 354499 [0x27a440c, 0x27faccf), 00:08:31.414 INFO: Loaded 1 PC tables (354499 PCs): 354499 [0x27facd0,0x2d63900), 00:08:31.414 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:31.414 INFO: A corpus is not provided, starting from an empty corpus 00:08:31.414 #2 INITED exec/s: 0 rss: 63Mb 00:08:31.414 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:31.414 This may also happen if the target rejected all inputs we tried so far 00:08:31.414 [2024-07-13 19:56:19.019547] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:08:31.414 [2024-07-13 19:56:19.055477] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:31.414 [2024-07-13 19:56:19.055531] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:31.931 NEW_FUNC[1/656]: 0x4969a0 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:08:31.931 NEW_FUNC[2/656]: 0x498ee0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:31.931 #3 NEW cov: 10906 ft: 10882 corp: 2/10b lim: 9 exec/s: 0 rss: 68Mb L: 9/9 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:08:31.931 [2024-07-13 19:56:19.524953] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:31.931 [2024-07-13 19:56:19.524990] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:32.190 NEW_FUNC[1/2]: 0x16d48d0 in _is_io_flags_valid /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ns_cmd.c:141 00:08:32.190 NEW_FUNC[2/2]: 0x16f1300 in _nvme_md_excluded_from_xfer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ns_cmd.c:54 00:08:32.190 #4 NEW cov: 10928 ft: 13511 corp: 3/19b lim: 9 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 CopyPart- 00:08:32.190 [2024-07-13 19:56:19.701613] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:32.190 [2024-07-13 19:56:19.701651] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:32.190 NEW_FUNC[1/1]: 0x1a4ec20 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:32.190 #15 NEW cov: 10945 ft: 14194 corp: 4/28b lim: 9 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:08:32.449 [2024-07-13 19:56:19.888198] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:32.449 [2024-07-13 19:56:19.888229] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:32.449 #21 NEW cov: 10945 ft: 14881 corp: 5/37b lim: 9 exec/s: 21 rss: 70Mb L: 9/9 MS: 1 ChangeBit- 00:08:32.449 [2024-07-13 19:56:20.068967] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:32.449 [2024-07-13 19:56:20.069002] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:32.708 #22 NEW cov: 10945 ft: 15849 corp: 6/46b lim: 9 exec/s: 22 rss: 70Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:08:32.708 [2024-07-13 19:56:20.248777] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:32.708 [2024-07-13 19:56:20.248809] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:32.708 #23 NEW cov: 10945 ft: 16081 corp: 7/55b lim: 9 exec/s: 23 rss: 70Mb L: 9/9 MS: 1 ShuffleBytes- 00:08:32.968 [2024-07-13 19:56:20.426448] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:32.968 [2024-07-13 19:56:20.426479] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:32.968 #24 NEW cov: 10945 ft: 16215 corp: 8/64b lim: 9 exec/s: 24 rss: 70Mb L: 9/9 
MS: 1 ChangeBit- 00:08:32.968 [2024-07-13 19:56:20.607262] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:32.968 [2024-07-13 19:56:20.607292] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:33.227 #25 NEW cov: 10945 ft: 16458 corp: 9/73b lim: 9 exec/s: 25 rss: 70Mb L: 9/9 MS: 1 ShuffleBytes- 00:08:33.227 [2024-07-13 19:56:20.786875] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:33.227 [2024-07-13 19:56:20.786905] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:33.486 #31 NEW cov: 10952 ft: 16531 corp: 10/82b lim: 9 exec/s: 31 rss: 70Mb L: 9/9 MS: 1 CrossOver- 00:08:33.486 [2024-07-13 19:56:20.969372] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:33.486 [2024-07-13 19:56:20.969402] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:33.486 #32 pulse cov: 10952 ft: 16714 corp: 10/82b lim: 9 exec/s: 16 rss: 70Mb 00:08:33.486 #32 NEW cov: 10952 ft: 16714 corp: 11/91b lim: 9 exec/s: 16 rss: 70Mb L: 9/9 MS: 1 CopyPart- 00:08:33.486 #32 DONE cov: 10952 ft: 16714 corp: 11/91b lim: 9 exec/s: 16 rss: 70Mb 00:08:33.486 ###### Recommended dictionary. ###### 00:08:33.486 "\001\000\000\000\000\000\000\000" # Uses: 1 00:08:33.486 ###### End of recommended dictionary. ###### 00:08:33.486 Done 32 runs in 2 second(s) 00:08:33.486 [2024-07-13 19:56:21.092635] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:08:33.746 19:56:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:08:33.746 19:56:21 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:33.746 19:56:21 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:33.746 19:56:21 llvm_fuzz.vfio_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:08:33.746 00:08:33.746 real 0m19.100s 00:08:33.746 user 0m26.775s 00:08:33.746 sys 0m1.819s 00:08:33.746 19:56:21 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:33.746 19:56:21 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:33.746 ************************************ 00:08:33.746 END TEST vfio_fuzz 00:08:33.746 ************************************ 00:08:33.746 19:56:21 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]] 00:08:33.746 00:08:33.746 real 1m22.956s 00:08:33.746 user 2m5.975s 00:08:33.746 sys 0m9.952s 00:08:33.746 19:56:21 llvm_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:33.746 19:56:21 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:33.746 ************************************ 00:08:33.746 END TEST llvm_fuzz 00:08:33.746 ************************************ 00:08:34.005 19:56:21 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:08:34.005 19:56:21 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:08:34.005 19:56:21 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:08:34.005 19:56:21 -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:34.005 19:56:21 -- common/autotest_common.sh@10 -- # set +x 00:08:34.005 19:56:21 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:08:34.005 19:56:21 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:08:34.005 19:56:21 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:08:34.005 19:56:21 -- common/autotest_common.sh@10 -- # set +x 00:08:40.635 INFO: APP EXITING 00:08:40.635 INFO: 
killing all VMs 00:08:40.635 INFO: killing vhost app 00:08:40.635 INFO: EXIT DONE 00:08:43.171 Waiting for block devices as requested 00:08:43.171 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:43.171 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:43.431 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:43.431 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:43.431 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:43.691 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:43.691 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:43.691 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:43.950 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:43.950 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:43.950 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:44.209 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:44.209 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:44.209 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:44.209 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:44.468 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:44.468 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:08:47.757 Cleaning 00:08:47.757 Removing: /dev/shm/spdk_tgt_trace.pid3654388 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3651943 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3653031 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3654388 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3654840 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3655878 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3655939 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3657054 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3657060 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3657481 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3657797 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3657902 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3658194 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3658507 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3658697 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3658862 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3659147 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3659998 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3663522 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3663743 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3664043 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3664059 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3664613 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3664659 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3665229 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3665387 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3665617 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3665736 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3665863 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3666025 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3666417 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3666700 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3666980 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3667056 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3667350 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3667373 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3667529 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3667730 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3668009 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3668300 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3668580 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3668861 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3669146 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3669422 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3669601 
00:08:47.757 Removing: /var/run/dpdk/spdk_pid3669822 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3670038 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3670313 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3670601 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3670883 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3671169 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3671455 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3671736 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3671969 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3672177 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3672364 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3672629 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3672902 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3673031 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3673536 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3674034 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3674518 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3674855 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3675380 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3675818 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3676204 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3676733 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3677057 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3677557 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3678035 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3678384 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3678917 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3679398 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3679738 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3680272 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3680807 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3681117 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3681634 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3682164 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3682516 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3682985 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3683518 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3683943 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3684343 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3684958 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3685489 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3685905 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3686314 00:08:47.757 Removing: /var/run/dpdk/spdk_pid3686847 00:08:48.015 Removing: /var/run/dpdk/spdk_pid3687388 00:08:48.015 Removing: /var/run/dpdk/spdk_pid3687857 00:08:48.015 Clean 00:08:48.015 19:56:35 -- common/autotest_common.sh@1447 -- # return 0 00:08:48.015 19:56:35 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:08:48.015 19:56:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:48.015 19:56:35 -- common/autotest_common.sh@10 -- # set +x 00:08:48.015 19:56:35 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:08:48.015 19:56:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:48.015 19:56:35 -- common/autotest_common.sh@10 -- # set +x 00:08:48.015 19:56:35 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:48.015 19:56:35 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:08:48.015 19:56:35 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:08:48.015 19:56:35 -- spdk/autotest.sh@391 -- # hash lcov 00:08:48.015 19:56:35 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:08:48.015 19:56:35 -- common/autobuild_common.sh@15 
-- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:48.015 19:56:35 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:08:48.015 19:56:35 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.015 19:56:35 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.015 19:56:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.015 19:56:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.015 19:56:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.015 19:56:35 -- paths/export.sh@5 -- $ export PATH 00:08:48.015 19:56:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.015 19:56:35 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:08:48.273 19:56:35 -- common/autobuild_common.sh@437 -- $ date +%s 00:08:48.273 19:56:35 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1720893395.XXXXXX 00:08:48.273 19:56:35 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1720893395.xPcit1 00:08:48.273 19:56:35 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:08:48.273 19:56:35 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']' 00:08:48.273 19:56:35 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build 00:08:48.273 19:56:35 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk' 00:08:48.273 19:56:35 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:08:48.273 19:56:35 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/dpdk --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:08:48.273 19:56:35 -- common/autobuild_common.sh@453 -- $ get_config_params 00:08:48.273 19:56:35 -- common/autotest_common.sh@395 -- $ 
xtrace_disable 00:08:48.273 19:56:35 -- common/autotest_common.sh@10 -- $ set +x 00:08:48.273 19:56:35 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/short-fuzz-phy-autotest/dpdk/build --with-vfio-user' 00:08:48.273 19:56:35 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:08:48.273 19:56:35 -- pm/common@17 -- $ local monitor 00:08:48.273 19:56:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:48.273 19:56:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:48.273 19:56:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:48.273 19:56:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:48.273 19:56:35 -- pm/common@25 -- $ sleep 1 00:08:48.273 19:56:35 -- pm/common@21 -- $ date +%s 00:08:48.273 19:56:35 -- pm/common@21 -- $ date +%s 00:08:48.273 19:56:35 -- pm/common@21 -- $ date +%s 00:08:48.273 19:56:35 -- pm/common@21 -- $ date +%s 00:08:48.273 19:56:35 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720893395 00:08:48.273 19:56:35 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720893395 00:08:48.273 19:56:35 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720893395 00:08:48.273 19:56:35 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720893395 00:08:48.273 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720893395_collect-vmstat.pm.log 00:08:48.273 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720893395_collect-cpu-load.pm.log 00:08:48.273 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720893395_collect-cpu-temp.pm.log 00:08:48.273 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720893395_collect-bmc-pm.bmc.pm.log 00:08:49.207 19:56:36 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:08:49.207 19:56:36 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:08:49.207 19:56:36 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:49.207 19:56:36 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:08:49.207 19:56:36 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:08:49.207 19:56:36 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:08:49.207 19:56:36 -- spdk/autopackage.sh@19 -- $ timing_finish 00:08:49.207 19:56:36 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:08:49.207 19:56:36 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:08:49.207 19:56:36 -- common/autotest_common.sh@735 -- $ 
00:08:49.207 19:56:36 -- spdk/autopackage.sh@20 -- $ exit 0
00:08:49.207 19:56:36 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:08:49.207 19:56:36 -- pm/common@29 -- $ signal_monitor_resources TERM
00:08:49.207 19:56:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:08:49.207 19:56:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:49.207 19:56:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:08:49.207 19:56:36 -- pm/common@44 -- $ pid=3694942
00:08:49.207 19:56:36 -- pm/common@50 -- $ kill -TERM 3694942
00:08:49.207 19:56:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:49.207 19:56:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:08:49.207 19:56:36 -- pm/common@44 -- $ pid=3694944
00:08:49.207 19:56:36 -- pm/common@50 -- $ kill -TERM 3694944
00:08:49.207 19:56:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:49.207 19:56:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:08:49.207 19:56:36 -- pm/common@44 -- $ pid=3694945
00:08:49.207 19:56:36 -- pm/common@50 -- $ kill -TERM 3694945
00:08:49.207 19:56:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:49.207 19:56:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:08:49.207 19:56:36 -- pm/common@44 -- $ pid=3694978
00:08:49.207 19:56:36 -- pm/common@50 -- $ sudo -E kill -TERM 3694978
00:08:49.207 + [[ -n 3532738 ]]
00:08:49.207 + sudo kill 3532738
00:08:49.215 [Pipeline] }
00:08:49.228 [Pipeline] // stage
00:08:49.232 [Pipeline] }
00:08:49.244 [Pipeline] // timeout
00:08:49.248 [Pipeline] }
00:08:49.259 [Pipeline] // catchError
00:08:49.263 [Pipeline] }
00:08:49.275 [Pipeline] // wrap
00:08:49.280 [Pipeline] }
00:08:49.290 [Pipeline] // catchError
00:08:49.297 [Pipeline] stage
00:08:49.298 [Pipeline] { (Epilogue)
00:08:49.308 [Pipeline] catchError
00:08:49.309 [Pipeline] {
00:08:49.318 [Pipeline] echo
00:08:49.319 Cleanup processes
00:08:49.323 [Pipeline] sh
00:08:49.604 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:49.604 3608131 sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720893071
00:08:49.604 3608163 bash /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720893071
00:08:49.604 3695103 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache
00:08:49.604 3695926 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:49.617 [Pipeline] sh
00:08:49.900 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:49.900 ++ grep -v 'sudo pgrep'
00:08:49.900 ++ awk '{print $1}'
00:08:49.900 + sudo kill -9 3608131 3608163 3695103
00:08:49.912 [Pipeline] sh
00:08:50.197 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:08:50.197 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
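
The pm/common@42-50 lines show the matching teardown: for each monitor a pid file is looked up under output/power and, if present, the recorded process receives SIGTERM; the BMC collector was started with sudo, so its kill needs sudo as well. A hedged sketch of that pid-file loop, under the same assumed layout as the start-up sketch:

    #!/usr/bin/env bash
    POWER_DIR=$PWD/output/power    # assumed location of the *.pid files
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm)

    signal_monitor_resources() {
        local signal=${1:-TERM} monitor pid
        for monitor in "${MONITOR_RESOURCES[@]}"; do
            # Skip monitors that never started (no pid file left behind).
            [[ -e $POWER_DIR/$monitor.pid ]] || continue
            pid=$(< "$POWER_DIR/$monitor.pid")
            if [[ $monitor == collect-bmc-pm ]]; then
                sudo -E kill -"$signal" "$pid"   # collector was launched with sudo
            else
                kill -"$signal" "$pid"
            fi
        done
    }

    signal_monitor_resources TERM

The Epilogue's "Cleanup processes" step that follows takes a blunter route: pgrep for anything still referencing the workspace, drop the pgrep itself with grep -v, and kill -9 whatever remains.
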
00:08:50.197 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:08:51.586 [Pipeline] sh
00:08:51.869 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:08:51.869 Artifacts sizes are good
00:08:51.883 [Pipeline] archiveArtifacts
00:08:51.890 Archiving artifacts
00:08:51.946 [Pipeline] sh
00:08:52.230 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest
00:08:52.244 [Pipeline] cleanWs
00:08:52.253 [WS-CLEANUP] Deleting project workspace...
00:08:52.253 [WS-CLEANUP] Deferred wipeout is used...
00:08:52.259 [WS-CLEANUP] done
00:08:52.261 [Pipeline] }
00:08:52.281 [Pipeline] // catchError
00:08:52.293 [Pipeline] sh
00:08:52.601 + logger -p user.info -t JENKINS-CI
00:08:52.609 [Pipeline] }
00:08:52.630 [Pipeline] // stage
00:08:52.637 [Pipeline] }
00:08:52.654 [Pipeline] // node
00:08:52.662 [Pipeline] End of Pipeline
00:08:52.692 Finished: SUCCESS
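
The two "xz: Reduced the number of threads" notices during artifact compression are xz's standard behaviour when multi-threaded compression is capped by a memory limit: one worker per CPU was requested (112 on this node), the estimated memory use exceeded the 14,721 MiB cap, so xz scaled back to 89 threads instead of failing. A hedged approximation of the kind of invocation inside compress_artifacts.sh that produces this notice (the real script may set the limit differently, for example via XZ_DEFAULTS or a percentage of RAM):

    #!/usr/bin/env bash
    # -T0 asks for one worker per CPU; --memlimit-compress caps memory, and xz
    # prints "Reduced the number of threads ..." when it has to scale back.
    tar -cf - output/ | xz -T0 --memlimit-compress=14721MiB > output.tar.xz
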