00:00:00.001 Started by upstream project "spdk-dpdk-per-patch" build number 235 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.028 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.028 The recommended git tool is: git 00:00:00.029 using credential 00000000-0000-0000-0000-000000000002 00:00:00.030 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.045 Fetching changes from the remote Git repository 00:00:00.047 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.067 Using shallow fetch with depth 1 00:00:00.067 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.067 > git --version # timeout=10 00:00:00.111 > git --version # 'git version 2.39.2' 00:00:00.111 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.112 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.112 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.822 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.834 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.846 Checking out Revision f620ee97e10840540f53609861ee9b86caa3c192 (FETCH_HEAD) 00:00:02.846 > git config core.sparsecheckout # timeout=10 00:00:02.858 > git read-tree -mu HEAD # timeout=10 00:00:02.874 > git checkout -f f620ee97e10840540f53609861ee9b86caa3c192 # timeout=5 00:00:02.891 Commit message: "change IP of vertiv1 PDU" 00:00:02.891 > git rev-list --no-walk f620ee97e10840540f53609861ee9b86caa3c192 # timeout=10 00:00:02.966 [Pipeline] Start of Pipeline 00:00:02.976 [Pipeline] library 00:00:02.977 Loading library shm_lib@master 00:00:02.977 Library shm_lib@master is cached. Copying from home. 00:00:02.989 [Pipeline] node 00:00:02.999 Running on WFP20 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:03.000 [Pipeline] { 00:00:03.009 [Pipeline] catchError 00:00:03.010 [Pipeline] { 00:00:03.021 [Pipeline] wrap 00:00:03.029 [Pipeline] { 00:00:03.035 [Pipeline] stage 00:00:03.036 [Pipeline] { (Prologue) 00:00:03.219 [Pipeline] sh 00:00:03.499 + logger -p user.info -t JENKINS-CI 00:00:03.520 [Pipeline] echo 00:00:03.521 Node: WFP20 00:00:03.529 [Pipeline] sh 00:00:03.823 [Pipeline] setCustomBuildProperty 00:00:03.835 [Pipeline] echo 00:00:03.836 Cleanup processes 00:00:03.838 [Pipeline] sh 00:00:04.120 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.120 3501711 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.130 [Pipeline] sh 00:00:04.409 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.409 ++ grep -v 'sudo pgrep' 00:00:04.409 ++ awk '{print $1}' 00:00:04.409 + sudo kill -9 00:00:04.409 + true 00:00:04.424 [Pipeline] cleanWs 00:00:04.433 [WS-CLEANUP] Deleting project workspace... 00:00:04.433 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.438 [WS-CLEANUP] done 00:00:04.441 [Pipeline] setCustomBuildProperty 00:00:04.453 [Pipeline] sh 00:00:04.730 + sudo git config --global --replace-all safe.directory '*' 00:00:04.786 [Pipeline] nodesByLabel 00:00:04.787 Found a total of 1 nodes with the 'sorcerer' label 00:00:04.793 [Pipeline] httpRequest 00:00:04.796 HttpMethod: GET 00:00:04.796 URL: http://10.211.164.101/packages/jbp_f620ee97e10840540f53609861ee9b86caa3c192.tar.gz 00:00:04.799 Sending request to url: http://10.211.164.101/packages/jbp_f620ee97e10840540f53609861ee9b86caa3c192.tar.gz 00:00:04.802 Response Code: HTTP/1.1 200 OK 00:00:04.802 Success: Status code 200 is in the accepted range: 200,404 00:00:04.802 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_f620ee97e10840540f53609861ee9b86caa3c192.tar.gz 00:00:05.311 [Pipeline] sh 00:00:05.593 + tar --no-same-owner -xf jbp_f620ee97e10840540f53609861ee9b86caa3c192.tar.gz 00:00:05.611 [Pipeline] httpRequest 00:00:05.616 HttpMethod: GET 00:00:05.616 URL: http://10.211.164.101/packages/spdk_b68ae4fb9294e2067b21e3ded559f637585386b4.tar.gz 00:00:05.617 Sending request to url: http://10.211.164.101/packages/spdk_b68ae4fb9294e2067b21e3ded559f637585386b4.tar.gz 00:00:05.619 Response Code: HTTP/1.1 200 OK 00:00:05.619 Success: Status code 200 is in the accepted range: 200,404 00:00:05.619 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_b68ae4fb9294e2067b21e3ded559f637585386b4.tar.gz 00:00:19.207 [Pipeline] sh 00:00:19.488 + tar --no-same-owner -xf spdk_b68ae4fb9294e2067b21e3ded559f637585386b4.tar.gz 00:00:22.037 [Pipeline] sh 00:00:22.352 + git -C spdk log --oneline -n5 00:00:22.352 b68ae4fb9 nvmf-tcp: Added queue depth tracing support 00:00:22.352 46d7b94f0 nvmf-rdma: Added queue depth tracing support 00:00:22.352 0127345c8 nvme-tcp: Added queue depth tracing support 00:00:22.352 887390405 nvme-pcie: Added queue depth tracing support 00:00:22.352 2a75dcc9a lib/bdev: Added queue depth tracing support 00:00:22.364 [Pipeline] sh 00:00:22.643 + git -C spdk/dpdk fetch https://review.spdk.io/gerrit/spdk/dpdk refs/changes/88/22688/3 00:00:23.579 From https://review.spdk.io/gerrit/spdk/dpdk 00:00:23.579 * branch refs/changes/88/22688/3 -> FETCH_HEAD 00:00:23.591 [Pipeline] sh 00:00:23.873 + git -C spdk/dpdk checkout FETCH_HEAD 00:00:24.441 Previous HEAD position was db99adb13f kernel/freebsd: fix module build on FreeBSD 14 00:00:24.441 HEAD is now at 04f9dc6803 meson/mlx5: Suppress -Wunused-value diagnostic 00:00:24.450 [Pipeline] } 00:00:24.466 [Pipeline] // stage 00:00:24.473 [Pipeline] stage 00:00:24.474 [Pipeline] { (Prepare) 00:00:24.485 [Pipeline] writeFile 00:00:24.499 [Pipeline] sh 00:00:24.781 + logger -p user.info -t JENKINS-CI 00:00:24.794 [Pipeline] sh 00:00:25.074 + logger -p user.info -t JENKINS-CI 00:00:25.086 [Pipeline] sh 00:00:25.367 + cat autorun-spdk.conf 00:00:25.367 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:25.367 SPDK_TEST_FUZZER_SHORT=1 00:00:25.367 SPDK_TEST_FUZZER=1 00:00:25.367 SPDK_RUN_UBSAN=1 00:00:25.377 RUN_NIGHTLY= 00:00:25.399 [Pipeline] readFile 00:00:25.416 [Pipeline] withEnv 00:00:25.417 [Pipeline] { 00:00:25.425 [Pipeline] sh 00:00:25.702 + set -ex 00:00:25.702 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:00:25.702 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:25.702 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:25.702 ++ SPDK_TEST_FUZZER_SHORT=1 00:00:25.702 ++ SPDK_TEST_FUZZER=1 00:00:25.702 ++ SPDK_RUN_UBSAN=1 00:00:25.702 ++ RUN_NIGHTLY= 
00:00:25.702 + case $SPDK_TEST_NVMF_NICS in 00:00:25.702 + DRIVERS= 00:00:25.702 + [[ -n '' ]] 00:00:25.702 + exit 0 00:00:25.711 [Pipeline] } 00:00:25.729 [Pipeline] // withEnv 00:00:25.734 [Pipeline] } 00:00:25.748 [Pipeline] // stage 00:00:25.755 [Pipeline] catchError 00:00:25.756 [Pipeline] { 00:00:25.767 [Pipeline] timeout 00:00:25.768 Timeout set to expire in 30 min 00:00:25.769 [Pipeline] { 00:00:25.780 [Pipeline] stage 00:00:25.782 [Pipeline] { (Tests) 00:00:25.796 [Pipeline] sh 00:00:26.077 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:26.077 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:26.077 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:00:26.077 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:00:26.077 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:26.077 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:00:26.077 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:00:26.077 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:00:26.077 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:00:26.077 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:00:26.077 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:26.077 + source /etc/os-release 00:00:26.077 ++ NAME='Fedora Linux' 00:00:26.077 ++ VERSION='38 (Cloud Edition)' 00:00:26.077 ++ ID=fedora 00:00:26.077 ++ VERSION_ID=38 00:00:26.077 ++ VERSION_CODENAME= 00:00:26.077 ++ PLATFORM_ID=platform:f38 00:00:26.077 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:26.077 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:26.077 ++ LOGO=fedora-logo-icon 00:00:26.077 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:26.077 ++ HOME_URL=https://fedoraproject.org/ 00:00:26.077 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:26.077 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:26.077 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:26.077 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:26.077 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:26.077 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:26.077 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:26.077 ++ SUPPORT_END=2024-05-14 00:00:26.077 ++ VARIANT='Cloud Edition' 00:00:26.077 ++ VARIANT_ID=cloud 00:00:26.077 + uname -a 00:00:26.077 Linux spdk-wfp-20 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:26.077 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:00:29.366 Hugepages 00:00:29.366 node hugesize free / total 00:00:29.366 node0 1048576kB 0 / 0 00:00:29.366 node0 2048kB 0 / 0 00:00:29.366 node1 1048576kB 0 / 0 00:00:29.366 node1 2048kB 0 / 0 00:00:29.366 00:00:29.366 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:29.366 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:29.366 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:29.366 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:29.366 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:29.366 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:29.366 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:29.366 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:29.366 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:29.366 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:29.366 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:29.366 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:29.366 I/OAT 
0000:80:04.3 8086 2021 1 ioatdma - - 00:00:29.366 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:29.366 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:29.366 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:29.366 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:29.366 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:29.366 + rm -f /tmp/spdk-ld-path 00:00:29.366 + source autorun-spdk.conf 00:00:29.366 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:29.366 ++ SPDK_TEST_FUZZER_SHORT=1 00:00:29.366 ++ SPDK_TEST_FUZZER=1 00:00:29.366 ++ SPDK_RUN_UBSAN=1 00:00:29.366 ++ RUN_NIGHTLY= 00:00:29.366 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:29.366 + [[ -n '' ]] 00:00:29.366 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:29.366 + for M in /var/spdk/build-*-manifest.txt 00:00:29.367 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:29.367 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:29.367 + for M in /var/spdk/build-*-manifest.txt 00:00:29.367 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:29.367 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:29.367 ++ uname 00:00:29.367 + [[ Linux == \L\i\n\u\x ]] 00:00:29.367 + sudo dmesg -T 00:00:29.367 + sudo dmesg --clear 00:00:29.367 + dmesg_pid=3502642 00:00:29.367 + [[ Fedora Linux == FreeBSD ]] 00:00:29.367 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:29.367 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:29.367 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:29.367 + [[ -x /usr/src/fio-static/fio ]] 00:00:29.367 + export FIO_BIN=/usr/src/fio-static/fio 00:00:29.367 + FIO_BIN=/usr/src/fio-static/fio 00:00:29.367 + sudo dmesg -Tw 00:00:29.367 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:29.367 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:29.367 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:29.367 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:29.367 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:29.367 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:29.367 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:29.367 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:29.367 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:29.367 Test configuration: 00:00:29.367 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:29.367 SPDK_TEST_FUZZER_SHORT=1 00:00:29.367 SPDK_TEST_FUZZER=1 00:00:29.367 SPDK_RUN_UBSAN=1 00:00:29.367 RUN_NIGHTLY= 11:36:56 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:00:29.367 11:36:56 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:29.367 11:36:56 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:29.367 11:36:56 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:29.367 11:36:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:29.367 11:36:56 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:29.367 11:36:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:29.367 11:36:56 -- paths/export.sh@5 -- $ export PATH 00:00:29.367 11:36:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:29.367 11:36:56 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:00:29.367 11:36:56 -- common/autobuild_common.sh@437 -- $ date +%s 00:00:29.367 11:36:56 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715679416.XXXXXX 00:00:29.626 11:36:56 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715679416.WMLtWY 00:00:29.626 11:36:56 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:00:29.626 11:36:56 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:00:29.626 11:36:56 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:00:29.626 11:36:56 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:29.626 11:36:56 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:29.626 11:36:56 -- common/autobuild_common.sh@453 -- $ get_config_params 00:00:29.626 11:36:56 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:00:29.626 11:36:56 -- common/autotest_common.sh@10 -- $ set +x 00:00:29.626 11:36:56 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:29.626 11:36:56 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:00:29.626 11:36:56 -- pm/common@17 -- $ local monitor 00:00:29.626 11:36:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:29.626 11:36:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:29.626 11:36:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:29.626 11:36:56 -- pm/common@21 -- $ date +%s 00:00:29.626 11:36:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:29.626 11:36:56 -- pm/common@21 -- $ date +%s 00:00:29.626 11:36:56 -- pm/common@21 -- $ date +%s 00:00:29.626 11:36:56 -- pm/common@25 -- $ sleep 1 00:00:29.626 11:36:56 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715679416 00:00:29.626 11:36:56 -- pm/common@21 -- $ date +%s 00:00:29.626 11:36:56 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715679416 00:00:29.627 11:36:56 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715679416 00:00:29.627 11:36:56 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715679416 00:00:29.627 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715679416_collect-cpu-temp.pm.log 00:00:29.627 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715679416_collect-vmstat.pm.log 00:00:29.627 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715679416_collect-cpu-load.pm.log 00:00:29.627 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715679416_collect-bmc-pm.bmc.pm.log 00:00:30.565 11:36:57 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:00:30.565 11:36:57 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:30.565 11:36:57 -- spdk/autobuild.sh@12 -- $ 
umask 022 00:00:30.565 11:36:57 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:30.565 11:36:57 -- spdk/autobuild.sh@16 -- $ date -u 00:00:30.565 Tue May 14 09:36:57 AM UTC 2024 00:00:30.565 11:36:57 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:30.565 v24.05-pre-610-gb68ae4fb9 00:00:30.565 11:36:57 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:30.565 11:36:57 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:30.565 11:36:57 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:30.565 11:36:57 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:00:30.565 11:36:57 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:00:30.565 11:36:57 -- common/autotest_common.sh@10 -- $ set +x 00:00:30.565 ************************************ 00:00:30.565 START TEST ubsan 00:00:30.565 ************************************ 00:00:30.565 11:36:57 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:00:30.565 using ubsan 00:00:30.565 00:00:30.565 real 0m0.001s 00:00:30.565 user 0m0.000s 00:00:30.565 sys 0m0.000s 00:00:30.565 11:36:57 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:00:30.565 11:36:57 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:30.565 ************************************ 00:00:30.565 END TEST ubsan 00:00:30.565 ************************************ 00:00:30.565 11:36:57 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:30.565 11:36:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:30.565 11:36:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:30.565 11:36:57 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:00:30.565 11:36:57 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:00:30.565 11:36:57 -- common/autobuild_common.sh@425 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:00:30.565 11:36:57 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:00:30.565 11:36:57 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:00:30.565 11:36:57 -- common/autotest_common.sh@10 -- $ set +x 00:00:30.825 ************************************ 00:00:30.825 START TEST autobuild_llvm_precompile 00:00:30.825 ************************************ 00:00:30.825 11:36:57 autobuild_llvm_precompile -- common/autotest_common.sh@1121 -- $ _llvm_precompile 00:00:30.825 11:36:57 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:00:30.825 11:36:57 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38) 00:00:30.825 Target: x86_64-redhat-linux-gnu 00:00:30.825 Thread model: posix 00:00:30.825 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:00:30.825 11:36:57 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16 00:00:30.825 11:36:57 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16 00:00:30.825 11:36:57 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16 00:00:30.825 11:36:57 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16 00:00:30.825 11:36:57 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16 00:00:30.825 11:36:57 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:00:30.825 11:36:57 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ 
fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:00:30.825 11:36:57 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]] 00:00:30.825 11:36:57 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a' 00:00:30.825 11:36:57 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:00:31.085 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:00:31.085 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:00:31.344 Using 'verbs' RDMA provider 00:00:47.170 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:02.054 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:02.054 Creating mk/config.mk...done. 00:01:02.054 Creating mk/cc.flags.mk...done. 00:01:02.054 Type 'make' to build. 00:01:02.054 00:01:02.054 real 0m29.462s 00:01:02.054 user 0m12.621s 00:01:02.054 sys 0m16.165s 00:01:02.054 11:37:27 autobuild_llvm_precompile -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:02.054 11:37:27 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:01:02.054 ************************************ 00:01:02.054 END TEST autobuild_llvm_precompile 00:01:02.054 ************************************ 00:01:02.054 11:37:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:02.054 11:37:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:02.054 11:37:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:02.054 11:37:27 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:02.055 11:37:27 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:01:02.055 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:02.055 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:02.055 Using 'verbs' RDMA provider 00:01:14.305 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:26.516 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:26.516 Creating mk/config.mk...done. 00:01:26.516 Creating mk/cc.flags.mk...done. 00:01:26.516 Type 'make' to build. 
00:01:26.516 11:37:52 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:26.516 11:37:52 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:26.516 11:37:52 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:26.516 11:37:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:26.516 ************************************ 00:01:26.516 START TEST make 00:01:26.516 ************************************ 00:01:26.516 11:37:52 make -- common/autotest_common.sh@1121 -- $ make -j112 00:01:26.516 make[1]: Nothing to be done for 'all'. 00:01:27.455 The Meson build system 00:01:27.455 Version: 1.3.1 00:01:27.455 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:01:27.455 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:27.455 Build type: native build 00:01:27.455 Project name: libvfio-user 00:01:27.455 Project version: 0.0.1 00:01:27.455 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:01:27.455 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:01:27.455 Host machine cpu family: x86_64 00:01:27.455 Host machine cpu: x86_64 00:01:27.455 Run-time dependency threads found: YES 00:01:27.455 Library dl found: YES 00:01:27.455 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:27.455 Run-time dependency json-c found: YES 0.17 00:01:27.455 Run-time dependency cmocka found: YES 1.1.7 00:01:27.455 Program pytest-3 found: NO 00:01:27.455 Program flake8 found: NO 00:01:27.455 Program misspell-fixer found: NO 00:01:27.455 Program restructuredtext-lint found: NO 00:01:27.455 Program valgrind found: YES (/usr/bin/valgrind) 00:01:27.455 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:27.455 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:27.455 Compiler for C supports arguments -Wwrite-strings: YES 00:01:27.455 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:27.455 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:27.455 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:27.455 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:27.455 Build targets in project: 8 00:01:27.455 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:27.455 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:27.455 00:01:27.455 libvfio-user 0.0.1 00:01:27.455 00:01:27.455 User defined options 00:01:27.455 buildtype : debug 00:01:27.455 default_library: static 00:01:27.455 libdir : /usr/local/lib 00:01:27.455 00:01:27.455 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:28.022 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:28.022 [1/36] Compiling C object samples/lspci.p/lspci.c.o 00:01:28.022 [2/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:28.022 [3/36] Compiling C object samples/null.p/null.c.o 00:01:28.022 [4/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:01:28.022 [5/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:28.022 [6/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:01:28.022 [7/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:28.022 [8/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:01:28.022 [9/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:28.022 [10/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:28.022 [11/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:01:28.022 [12/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:28.022 [13/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:01:28.022 [14/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:28.022 [15/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:01:28.022 [16/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:28.022 [17/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:28.022 [18/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:28.022 [19/36] Compiling C object test/unit_tests.p/mocks.c.o 00:01:28.022 [20/36] Compiling C object samples/server.p/server.c.o 00:01:28.022 [21/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:01:28.022 [22/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:28.022 [23/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:28.022 [24/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:28.022 [25/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:28.022 [26/36] Compiling C object samples/client.p/client.c.o 00:01:28.022 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:01:28.022 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:28.022 [29/36] Linking static target lib/libvfio-user.a 00:01:28.022 [30/36] Linking target samples/client 00:01:28.022 [31/36] Linking target test/unit_tests 00:01:28.022 [32/36] Linking target samples/gpio-pci-idio-16 00:01:28.022 [33/36] Linking target samples/lspci 00:01:28.022 [34/36] Linking target samples/server 00:01:28.022 [35/36] Linking target samples/shadow_ioeventfd_server 00:01:28.022 [36/36] Linking target samples/null 00:01:28.022 INFO: autodetecting backend as ninja 00:01:28.022 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:28.281 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:28.540 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:28.540 ninja: no work to do. 00:01:33.812 The Meson build system 00:01:33.812 Version: 1.3.1 00:01:33.812 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:01:33.812 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:01:33.812 Build type: native build 00:01:33.812 Program cat found: YES (/usr/bin/cat) 00:01:33.812 Project name: DPDK 00:01:33.812 Project version: 24.03.0 00:01:33.812 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:01:33.812 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:01:33.812 Host machine cpu family: x86_64 00:01:33.812 Host machine cpu: x86_64 00:01:33.812 Message: ## Building in Developer Mode ## 00:01:33.812 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:33.812 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:33.812 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:33.812 Program python3 found: YES (/usr/bin/python3) 00:01:33.812 Program cat found: YES (/usr/bin/cat) 00:01:33.812 Compiler for C supports arguments -march=native: YES 00:01:33.812 Checking for size of "void *" : 8 00:01:33.812 Checking for size of "void *" : 8 (cached) 00:01:33.812 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:33.812 Library m found: YES 00:01:33.812 Library numa found: YES 00:01:33.812 Has header "numaif.h" : YES 00:01:33.812 Library fdt found: NO 00:01:33.812 Library execinfo found: NO 00:01:33.812 Has header "execinfo.h" : YES 00:01:33.812 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:33.812 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:33.812 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:33.812 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:33.812 Run-time dependency openssl found: YES 3.0.9 00:01:33.812 Run-time dependency libpcap found: YES 1.10.4 00:01:33.812 Has header "pcap.h" with dependency libpcap: YES 00:01:33.812 Compiler for C supports arguments -Wcast-qual: YES 00:01:33.812 Compiler for C supports arguments -Wdeprecated: YES 00:01:33.812 Compiler for C supports arguments -Wformat: YES 00:01:33.812 Compiler for C supports arguments -Wformat-nonliteral: YES 00:01:33.812 Compiler for C supports arguments -Wformat-security: YES 00:01:33.812 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:33.812 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:33.812 Compiler for C supports arguments -Wnested-externs: YES 00:01:33.812 Compiler for C supports arguments -Wold-style-definition: YES 00:01:33.812 Compiler for C supports arguments -Wpointer-arith: YES 00:01:33.812 Compiler for C supports arguments -Wsign-compare: YES 00:01:33.812 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:33.812 Compiler for C supports arguments -Wundef: YES 00:01:33.812 Compiler for C supports arguments -Wwrite-strings: YES 00:01:33.812 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:33.812 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:01:33.812 Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:01:33.812 Program objdump found: YES (/usr/bin/objdump) 00:01:33.812 Compiler for C supports arguments -mavx512f: YES 00:01:33.812 Checking if "AVX512 checking" compiles: YES 00:01:33.812 Fetching value of define "__SSE4_2__" : 1 00:01:33.812 Fetching value of define "__AES__" : 1 00:01:33.812 Fetching value of define "__AVX__" : 1 00:01:33.812 Fetching value of define "__AVX2__" : 1 00:01:33.812 Fetching value of define "__AVX512BW__" : 1 00:01:33.812 Fetching value of define "__AVX512CD__" : 1 00:01:33.812 Fetching value of define "__AVX512DQ__" : 1 00:01:33.812 Fetching value of define "__AVX512F__" : 1 00:01:33.812 Fetching value of define "__AVX512VL__" : 1 00:01:33.812 Fetching value of define "__PCLMUL__" : 1 00:01:33.812 Fetching value of define "__RDRND__" : 1 00:01:33.812 Fetching value of define "__RDSEED__" : 1 00:01:33.812 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:33.812 Fetching value of define "__znver1__" : (undefined) 00:01:33.812 Fetching value of define "__znver2__" : (undefined) 00:01:33.812 Fetching value of define "__znver3__" : (undefined) 00:01:33.812 Fetching value of define "__znver4__" : (undefined) 00:01:33.812 Compiler for C supports arguments -Wno-format-truncation: NO 00:01:33.812 Message: lib/log: Defining dependency "log" 00:01:33.812 Message: lib/kvargs: Defining dependency "kvargs" 00:01:33.812 Message: lib/telemetry: Defining dependency "telemetry" 00:01:33.812 Checking for function "getentropy" : NO 00:01:33.812 Message: lib/eal: Defining dependency "eal" 00:01:33.812 Message: lib/ring: Defining dependency "ring" 00:01:33.812 Message: lib/rcu: Defining dependency "rcu" 00:01:33.812 Message: lib/mempool: Defining dependency "mempool" 00:01:33.812 Message: lib/mbuf: Defining dependency "mbuf" 00:01:33.812 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:33.812 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:33.812 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:33.812 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:33.812 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:33.812 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:33.812 Compiler for C supports arguments -mpclmul: YES 00:01:33.812 Compiler for C supports arguments -maes: YES 00:01:33.812 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:33.812 Compiler for C supports arguments -mavx512bw: YES 00:01:33.812 Compiler for C supports arguments -mavx512dq: YES 00:01:33.812 Compiler for C supports arguments -mavx512vl: YES 00:01:33.812 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:33.812 Compiler for C supports arguments -mavx2: YES 00:01:33.812 Compiler for C supports arguments -mavx: YES 00:01:33.812 Message: lib/net: Defining dependency "net" 00:01:33.812 Message: lib/meter: Defining dependency "meter" 00:01:33.812 Message: lib/ethdev: Defining dependency "ethdev" 00:01:33.812 Message: lib/pci: Defining dependency "pci" 00:01:33.812 Message: lib/cmdline: Defining dependency "cmdline" 00:01:33.812 Message: lib/hash: Defining dependency "hash" 00:01:33.812 Message: lib/timer: Defining dependency "timer" 00:01:33.812 Message: lib/compressdev: Defining dependency "compressdev" 00:01:33.812 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:33.812 Message: lib/dmadev: Defining dependency "dmadev" 00:01:33.812 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:33.812 Message: lib/power: Defining dependency "power" 00:01:33.812 Message: lib/reorder: Defining 
dependency "reorder" 00:01:33.812 Message: lib/security: Defining dependency "security" 00:01:33.812 lib/meson.build:163: WARNING: Cannot disable mandatory library "stack" 00:01:33.812 Message: lib/stack: Defining dependency "stack" 00:01:33.812 Has header "linux/userfaultfd.h" : YES 00:01:33.812 Has header "linux/vduse.h" : YES 00:01:33.812 Message: lib/vhost: Defining dependency "vhost" 00:01:33.812 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:01:33.812 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:33.812 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:33.812 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:33.812 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:33.812 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:33.812 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:33.812 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:33.812 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:33.812 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:33.813 Program doxygen found: YES (/usr/bin/doxygen) 00:01:33.813 Configuring doxy-api-html.conf using configuration 00:01:33.813 Configuring doxy-api-man.conf using configuration 00:01:33.813 Program mandb found: YES (/usr/bin/mandb) 00:01:33.813 Program sphinx-build found: NO 00:01:33.813 Configuring rte_build_config.h using configuration 00:01:33.813 Message: 00:01:33.813 ================= 00:01:33.813 Applications Enabled 00:01:33.813 ================= 00:01:33.813 00:01:33.813 apps: 00:01:33.813 00:01:33.813 00:01:33.813 Message: 00:01:33.813 ================= 00:01:33.813 Libraries Enabled 00:01:33.813 ================= 00:01:33.813 00:01:33.813 libs: 00:01:33.813 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:33.813 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:33.813 cryptodev, dmadev, power, reorder, security, stack, vhost, 00:01:33.813 00:01:33.813 Message: 00:01:33.813 =============== 00:01:33.813 Drivers Enabled 00:01:33.813 =============== 00:01:33.813 00:01:33.813 common: 00:01:33.813 00:01:33.813 bus: 00:01:33.813 pci, vdev, 00:01:33.813 mempool: 00:01:33.813 ring, 00:01:33.813 dma: 00:01:33.813 00:01:33.813 net: 00:01:33.813 00:01:33.813 crypto: 00:01:33.813 00:01:33.813 compress: 00:01:33.813 00:01:33.813 vdpa: 00:01:33.813 00:01:33.813 00:01:33.813 Message: 00:01:33.813 ================= 00:01:33.813 Content Skipped 00:01:33.813 ================= 00:01:33.813 00:01:33.813 apps: 00:01:33.813 dumpcap: explicitly disabled via build config 00:01:33.813 graph: explicitly disabled via build config 00:01:33.813 pdump: explicitly disabled via build config 00:01:33.813 proc-info: explicitly disabled via build config 00:01:33.813 test-acl: explicitly disabled via build config 00:01:33.813 test-bbdev: explicitly disabled via build config 00:01:33.813 test-cmdline: explicitly disabled via build config 00:01:33.813 test-compress-perf: explicitly disabled via build config 00:01:33.813 test-crypto-perf: explicitly disabled via build config 00:01:33.813 test-dma-perf: explicitly disabled via build config 00:01:33.813 test-eventdev: explicitly disabled via build config 00:01:33.813 test-fib: explicitly disabled via build config 00:01:33.813 test-flow-perf: explicitly disabled via build config 00:01:33.813 test-gpudev: explicitly disabled via build config 
00:01:33.813 test-mldev: explicitly disabled via build config 00:01:33.813 test-pipeline: explicitly disabled via build config 00:01:33.813 test-pmd: explicitly disabled via build config 00:01:33.813 test-regex: explicitly disabled via build config 00:01:33.813 test-sad: explicitly disabled via build config 00:01:33.813 test-security-perf: explicitly disabled via build config 00:01:33.813 00:01:33.813 libs: 00:01:33.813 argparse: explicitly disabled via build config 00:01:33.813 metrics: explicitly disabled via build config 00:01:33.813 acl: explicitly disabled via build config 00:01:33.813 bbdev: explicitly disabled via build config 00:01:33.813 bitratestats: explicitly disabled via build config 00:01:33.813 bpf: explicitly disabled via build config 00:01:33.813 cfgfile: explicitly disabled via build config 00:01:33.813 distributor: explicitly disabled via build config 00:01:33.813 efd: explicitly disabled via build config 00:01:33.813 eventdev: explicitly disabled via build config 00:01:33.813 dispatcher: explicitly disabled via build config 00:01:33.813 gpudev: explicitly disabled via build config 00:01:33.813 gro: explicitly disabled via build config 00:01:33.813 gso: explicitly disabled via build config 00:01:33.813 ip_frag: explicitly disabled via build config 00:01:33.813 jobstats: explicitly disabled via build config 00:01:33.813 latencystats: explicitly disabled via build config 00:01:33.813 lpm: explicitly disabled via build config 00:01:33.813 member: explicitly disabled via build config 00:01:33.813 pcapng: explicitly disabled via build config 00:01:33.813 rawdev: explicitly disabled via build config 00:01:33.813 regexdev: explicitly disabled via build config 00:01:33.813 mldev: explicitly disabled via build config 00:01:33.813 rib: explicitly disabled via build config 00:01:33.813 sched: explicitly disabled via build config 00:01:33.813 ipsec: explicitly disabled via build config 00:01:33.813 pdcp: explicitly disabled via build config 00:01:33.813 fib: explicitly disabled via build config 00:01:33.813 port: explicitly disabled via build config 00:01:33.813 pdump: explicitly disabled via build config 00:01:33.813 table: explicitly disabled via build config 00:01:33.813 pipeline: explicitly disabled via build config 00:01:33.813 graph: explicitly disabled via build config 00:01:33.813 node: explicitly disabled via build config 00:01:33.813 00:01:33.813 drivers: 00:01:33.813 common/cpt: not in enabled drivers build config 00:01:33.813 common/dpaax: not in enabled drivers build config 00:01:33.813 common/iavf: not in enabled drivers build config 00:01:33.813 common/idpf: not in enabled drivers build config 00:01:33.813 common/ionic: not in enabled drivers build config 00:01:33.813 common/mvep: not in enabled drivers build config 00:01:33.813 common/octeontx: not in enabled drivers build config 00:01:33.813 bus/auxiliary: not in enabled drivers build config 00:01:33.813 bus/cdx: not in enabled drivers build config 00:01:33.813 bus/dpaa: not in enabled drivers build config 00:01:33.813 bus/fslmc: not in enabled drivers build config 00:01:33.813 bus/ifpga: not in enabled drivers build config 00:01:33.813 bus/platform: not in enabled drivers build config 00:01:33.813 bus/uacce: not in enabled drivers build config 00:01:33.813 bus/vmbus: not in enabled drivers build config 00:01:33.813 common/cnxk: not in enabled drivers build config 00:01:33.813 common/mlx5: not in enabled drivers build config 00:01:33.813 common/nfp: not in enabled drivers build config 00:01:33.813 common/nitrox: not 
in enabled drivers build config 00:01:33.813 common/qat: not in enabled drivers build config 00:01:33.813 common/sfc_efx: not in enabled drivers build config 00:01:33.813 mempool/bucket: not in enabled drivers build config 00:01:33.813 mempool/cnxk: not in enabled drivers build config 00:01:33.813 mempool/dpaa: not in enabled drivers build config 00:01:33.813 mempool/dpaa2: not in enabled drivers build config 00:01:33.813 mempool/octeontx: not in enabled drivers build config 00:01:33.813 mempool/stack: not in enabled drivers build config 00:01:33.813 dma/cnxk: not in enabled drivers build config 00:01:33.813 dma/dpaa: not in enabled drivers build config 00:01:33.813 dma/dpaa2: not in enabled drivers build config 00:01:33.813 dma/hisilicon: not in enabled drivers build config 00:01:33.813 dma/idxd: not in enabled drivers build config 00:01:33.813 dma/ioat: not in enabled drivers build config 00:01:33.813 dma/skeleton: not in enabled drivers build config 00:01:33.813 net/af_packet: not in enabled drivers build config 00:01:33.813 net/af_xdp: not in enabled drivers build config 00:01:33.813 net/ark: not in enabled drivers build config 00:01:33.813 net/atlantic: not in enabled drivers build config 00:01:33.813 net/avp: not in enabled drivers build config 00:01:33.813 net/axgbe: not in enabled drivers build config 00:01:33.813 net/bnx2x: not in enabled drivers build config 00:01:33.813 net/bnxt: not in enabled drivers build config 00:01:33.813 net/bonding: not in enabled drivers build config 00:01:33.813 net/cnxk: not in enabled drivers build config 00:01:33.813 net/cpfl: not in enabled drivers build config 00:01:33.813 net/cxgbe: not in enabled drivers build config 00:01:33.813 net/dpaa: not in enabled drivers build config 00:01:33.813 net/dpaa2: not in enabled drivers build config 00:01:33.813 net/e1000: not in enabled drivers build config 00:01:33.813 net/ena: not in enabled drivers build config 00:01:33.813 net/enetc: not in enabled drivers build config 00:01:33.813 net/enetfec: not in enabled drivers build config 00:01:33.813 net/enic: not in enabled drivers build config 00:01:33.813 net/failsafe: not in enabled drivers build config 00:01:33.813 net/fm10k: not in enabled drivers build config 00:01:33.813 net/gve: not in enabled drivers build config 00:01:33.813 net/hinic: not in enabled drivers build config 00:01:33.813 net/hns3: not in enabled drivers build config 00:01:33.813 net/i40e: not in enabled drivers build config 00:01:33.813 net/iavf: not in enabled drivers build config 00:01:33.813 net/ice: not in enabled drivers build config 00:01:33.813 net/idpf: not in enabled drivers build config 00:01:33.814 net/igc: not in enabled drivers build config 00:01:33.814 net/ionic: not in enabled drivers build config 00:01:33.814 net/ipn3ke: not in enabled drivers build config 00:01:33.814 net/ixgbe: not in enabled drivers build config 00:01:33.814 net/mana: not in enabled drivers build config 00:01:33.814 net/memif: not in enabled drivers build config 00:01:33.814 net/mlx4: not in enabled drivers build config 00:01:33.814 net/mlx5: not in enabled drivers build config 00:01:33.814 net/mvneta: not in enabled drivers build config 00:01:33.814 net/mvpp2: not in enabled drivers build config 00:01:33.814 net/netvsc: not in enabled drivers build config 00:01:33.814 net/nfb: not in enabled drivers build config 00:01:33.814 net/nfp: not in enabled drivers build config 00:01:33.814 net/ngbe: not in enabled drivers build config 00:01:33.814 net/null: not in enabled drivers build config 00:01:33.814 
net/octeontx: not in enabled drivers build config 00:01:33.814 net/octeon_ep: not in enabled drivers build config 00:01:33.814 net/pcap: not in enabled drivers build config 00:01:33.814 net/pfe: not in enabled drivers build config 00:01:33.814 net/qede: not in enabled drivers build config 00:01:33.814 net/ring: not in enabled drivers build config 00:01:33.814 net/sfc: not in enabled drivers build config 00:01:33.814 net/softnic: not in enabled drivers build config 00:01:33.814 net/tap: not in enabled drivers build config 00:01:33.814 net/thunderx: not in enabled drivers build config 00:01:33.814 net/txgbe: not in enabled drivers build config 00:01:33.814 net/vdev_netvsc: not in enabled drivers build config 00:01:33.814 net/vhost: not in enabled drivers build config 00:01:33.814 net/virtio: not in enabled drivers build config 00:01:33.814 net/vmxnet3: not in enabled drivers build config 00:01:33.814 raw/*: missing internal dependency, "rawdev" 00:01:33.814 crypto/armv8: not in enabled drivers build config 00:01:33.814 crypto/bcmfs: not in enabled drivers build config 00:01:33.814 crypto/caam_jr: not in enabled drivers build config 00:01:33.814 crypto/ccp: not in enabled drivers build config 00:01:33.814 crypto/cnxk: not in enabled drivers build config 00:01:33.814 crypto/dpaa_sec: not in enabled drivers build config 00:01:33.814 crypto/dpaa2_sec: not in enabled drivers build config 00:01:33.814 crypto/ipsec_mb: not in enabled drivers build config 00:01:33.814 crypto/mlx5: not in enabled drivers build config 00:01:33.814 crypto/mvsam: not in enabled drivers build config 00:01:33.814 crypto/nitrox: not in enabled drivers build config 00:01:33.814 crypto/null: not in enabled drivers build config 00:01:33.814 crypto/octeontx: not in enabled drivers build config 00:01:33.814 crypto/openssl: not in enabled drivers build config 00:01:33.814 crypto/scheduler: not in enabled drivers build config 00:01:33.814 crypto/uadk: not in enabled drivers build config 00:01:33.814 crypto/virtio: not in enabled drivers build config 00:01:33.814 compress/isal: not in enabled drivers build config 00:01:33.814 compress/mlx5: not in enabled drivers build config 00:01:33.814 compress/nitrox: not in enabled drivers build config 00:01:33.814 compress/octeontx: not in enabled drivers build config 00:01:33.814 compress/zlib: not in enabled drivers build config 00:01:33.814 regex/*: missing internal dependency, "regexdev" 00:01:33.814 ml/*: missing internal dependency, "mldev" 00:01:33.814 vdpa/ifc: not in enabled drivers build config 00:01:33.814 vdpa/mlx5: not in enabled drivers build config 00:01:33.814 vdpa/nfp: not in enabled drivers build config 00:01:33.814 vdpa/sfc: not in enabled drivers build config 00:01:33.814 event/*: missing internal dependency, "eventdev" 00:01:33.814 baseband/*: missing internal dependency, "bbdev" 00:01:33.814 gpu/*: missing internal dependency, "gpudev" 00:01:33.814 00:01:33.814 00:01:34.072 Build targets in project: 88 00:01:34.072 00:01:34.072 DPDK 24.03.0 00:01:34.072 00:01:34.072 User defined options 00:01:34.072 buildtype : debug 00:01:34.072 default_library : static 00:01:34.072 libdir : lib 00:01:34.072 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:34.072 c_args : -fPIC -Werror 00:01:34.072 c_link_args : 00:01:34.072 cpu_instruction_set: native 00:01:34.072 disable_apps : 
test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:34.072 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib,argparse 00:01:34.072 enable_docs : false 00:01:34.072 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:34.072 enable_kmods : false 00:01:34.072 tests : false 00:01:34.072 00:01:34.072 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:34.643 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:01:34.643 [1/274] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:34.643 [2/274] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:34.643 [3/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:34.643 [4/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:34.643 [5/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:34.643 [6/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:34.643 [7/274] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:34.643 [8/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:34.643 [9/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:34.643 [10/274] Linking static target lib/librte_kvargs.a 00:01:34.643 [11/274] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:34.643 [12/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:34.643 [13/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:34.643 [14/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:34.643 [15/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:34.643 [16/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:34.643 [17/274] Linking static target lib/librte_log.a 00:01:34.643 [18/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:34.643 [19/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:34.643 [20/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:34.643 [21/274] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:34.643 [22/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:34.643 [23/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:34.643 [24/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:34.643 [25/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:34.643 [26/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:34.643 [27/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:34.643 [28/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:34.643 [29/274] Linking static target lib/librte_pci.a 00:01:34.643 [30/274] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:34.643 [31/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:34.643 [32/274] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:34.907 [33/274] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:34.907 [34/274] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:34.907 [35/274] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:34.907 [36/274] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.165 [37/274] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.165 [38/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:35.165 [39/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:35.165 [40/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:35.165 [41/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:35.165 [42/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:35.165 [43/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:35.165 [44/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:35.165 [45/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:35.165 [46/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:35.165 [47/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:35.165 [48/274] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:35.165 [49/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:35.165 [50/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:35.165 [51/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:35.165 [52/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:35.165 [53/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:35.165 [54/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:35.165 [55/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:35.165 [56/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:35.165 [57/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:35.165 [58/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:35.165 [59/274] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:35.165 [60/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:35.165 [61/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:35.165 [62/274] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:35.165 [63/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:35.165 [64/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:35.165 [65/274] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:35.165 [66/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:35.165 [67/274] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:35.165 [68/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:35.165 [69/274] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:35.165 [70/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:35.165 [71/274] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:35.165 [72/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:35.165 [73/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:35.165 [74/274] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:35.165 [75/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:35.165 [76/274] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:35.165 [77/274] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:35.165 [78/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:35.165 [79/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:35.165 [80/274] Linking static target lib/librte_telemetry.a 00:01:35.165 [81/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:35.165 [82/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:35.165 [83/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:35.165 [84/274] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:35.165 [85/274] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:35.165 [86/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:35.165 [87/274] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:35.165 [88/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:35.165 [89/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:35.165 [90/274] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:35.165 [91/274] Linking static target lib/librte_meter.a 00:01:35.165 [92/274] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:35.165 [93/274] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:35.165 [94/274] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:35.165 [95/274] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:35.165 [96/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:35.165 [97/274] Linking static target lib/librte_ring.a 00:01:35.165 [98/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:35.165 [99/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:35.165 [100/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:35.165 [101/274] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:35.165 [102/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:35.165 [103/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:35.165 [104/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:35.165 [105/274] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:35.165 [106/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:35.165 [107/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:35.165 [108/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:35.165 [109/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:35.165 [110/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:35.165 [111/274] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:35.165 [112/274] Compiling 
C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:35.165 [113/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:35.165 [114/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:35.165 [115/274] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:35.165 [116/274] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:35.165 [117/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:35.165 [118/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:35.165 [119/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:35.165 [120/274] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:35.165 [121/274] Linking static target lib/librte_timer.a 00:01:35.165 [122/274] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:35.165 [123/274] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:35.166 [124/274] Linking static target lib/librte_cmdline.a 00:01:35.166 [125/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:35.166 [126/274] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:35.166 [127/274] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.166 [128/274] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:35.166 [129/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:35.166 [130/274] Linking static target lib/librte_net.a 00:01:35.166 [131/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:35.166 [132/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:35.423 [133/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:35.423 [134/274] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:35.423 [135/274] Linking static target lib/librte_eal.a 00:01:35.423 [136/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:35.423 [137/274] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:35.423 [138/274] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:35.423 [139/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:35.423 [140/274] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:35.423 [141/274] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:35.423 [142/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:35.423 [143/274] Linking static target lib/librte_dmadev.a 00:01:35.423 [144/274] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:35.423 [145/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:35.423 [146/274] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:35.423 [147/274] Linking static target lib/librte_rcu.a 00:01:35.423 [148/274] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:35.423 [149/274] Linking static target lib/librte_mempool.a 00:01:35.423 [150/274] Linking target lib/librte_log.so.24.1 00:01:35.423 [151/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:35.423 [152/274] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:35.423 [153/274] Linking static target lib/librte_mbuf.a 00:01:35.423 [154/274] 
Linking static target lib/librte_compressdev.a 00:01:35.423 [155/274] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:35.423 [156/274] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:35.423 [157/274] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:35.423 [158/274] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:35.423 [159/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:35.423 [160/274] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:35.423 [161/274] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.423 [162/274] Linking static target lib/librte_hash.a 00:01:35.423 [163/274] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:35.423 [164/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:35.423 [165/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:35.423 [166/274] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:35.423 [167/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:35.423 [168/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:35.423 [169/274] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:35.423 [170/274] Linking target lib/librte_kvargs.so.24.1 00:01:35.423 [171/274] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:35.423 [172/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:35.682 [173/274] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:35.682 [174/274] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:35.682 [175/274] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:35.682 [176/274] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:35.682 [177/274] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.682 [178/274] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:35.682 [179/274] Linking static target lib/librte_power.a 00:01:35.682 [180/274] Linking static target lib/librte_reorder.a 00:01:35.682 [181/274] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:35.682 [182/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:35.682 [183/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:35.682 [184/274] Linking static target lib/librte_cryptodev.a 00:01:35.682 [185/274] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:35.682 [186/274] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:35.682 [187/274] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:35.682 [188/274] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:35.682 [189/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:35.682 [190/274] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:35.682 [191/274] Linking static target lib/librte_security.a 00:01:35.682 [192/274] Linking static target lib/librte_stack.a 00:01:35.682 [193/274] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.682 [194/274] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:35.682 [195/274] Compiling C object 
lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:35.682 [196/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:35.682 [197/274] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:35.682 [198/274] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:35.682 [199/274] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:35.682 [200/274] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:35.682 [201/274] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.682 [202/274] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.682 [203/274] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:35.682 [204/274] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:35.682 [205/274] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:35.682 [206/274] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:35.682 [207/274] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.682 [208/274] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:35.682 [209/274] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:35.941 [210/274] Linking static target drivers/librte_bus_vdev.a 00:01:35.941 [211/274] Linking static target drivers/librte_mempool_ring.a 00:01:35.941 [212/274] Linking target lib/librte_telemetry.so.24.1 00:01:35.941 [213/274] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.941 [214/274] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:35.941 [215/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:35.941 [216/274] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:35.941 [217/274] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:35.941 [218/274] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:35.941 [219/274] Linking static target lib/librte_ethdev.a 00:01:35.941 [220/274] Linking static target drivers/librte_bus_pci.a 00:01:35.941 [221/274] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:35.941 [222/274] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.941 [223/274] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.199 [224/274] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.199 [225/274] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.200 [226/274] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.200 [227/274] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.458 [228/274] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.458 [229/274] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.458 [230/274] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:36.458 [231/274] Linking static target 
lib/librte_vhost.a 00:01:36.717 [232/274] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.717 [233/274] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.717 [234/274] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.093 [235/274] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.660 [236/274] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.780 [237/274] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.156 [238/274] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.156 [239/274] Linking target lib/librte_eal.so.24.1 00:01:48.156 [240/274] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:48.415 [241/274] Linking target lib/librte_stack.so.24.1 00:01:48.415 [242/274] Linking target lib/librte_pci.so.24.1 00:01:48.415 [243/274] Linking target lib/librte_timer.so.24.1 00:01:48.415 [244/274] Linking target lib/librte_meter.so.24.1 00:01:48.415 [245/274] Linking target drivers/librte_bus_vdev.so.24.1 00:01:48.415 [246/274] Linking target lib/librte_ring.so.24.1 00:01:48.415 [247/274] Linking target lib/librte_dmadev.so.24.1 00:01:48.415 [248/274] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:48.415 [249/274] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:48.415 [250/274] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:48.415 [251/274] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:48.415 [252/274] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:48.415 [253/274] Linking target drivers/librte_bus_pci.so.24.1 00:01:48.415 [254/274] Linking target lib/librte_rcu.so.24.1 00:01:48.415 [255/274] Linking target lib/librte_mempool.so.24.1 00:01:48.686 [256/274] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:48.686 [257/274] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:48.686 [258/274] Linking target drivers/librte_mempool_ring.so.24.1 00:01:48.686 [259/274] Linking target lib/librte_mbuf.so.24.1 00:01:48.949 [260/274] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:48.949 [261/274] Linking target lib/librte_compressdev.so.24.1 00:01:48.949 [262/274] Linking target lib/librte_reorder.so.24.1 00:01:48.949 [263/274] Linking target lib/librte_net.so.24.1 00:01:48.949 [264/274] Linking target lib/librte_cryptodev.so.24.1 00:01:49.207 [265/274] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:49.207 [266/274] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:49.207 [267/274] Linking target lib/librte_security.so.24.1 00:01:49.207 [268/274] Linking target lib/librte_cmdline.so.24.1 00:01:49.207 [269/274] Linking target lib/librte_hash.so.24.1 00:01:49.207 [270/274] Linking target lib/librte_ethdev.so.24.1 00:01:49.207 [271/274] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:49.466 [272/274] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:49.466 [273/274] Linking target lib/librte_power.so.24.1 
00:01:49.466 [274/274] Linking target lib/librte_vhost.so.24.1 00:01:49.466 INFO: autodetecting backend as ninja 00:01:49.466 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 112 00:01:50.403 CC lib/log/log.o 00:01:50.403 CC lib/log/log_flags.o 00:01:50.403 CC lib/log/log_deprecated.o 00:01:50.403 CC lib/ut/ut.o 00:01:50.403 CC lib/ut_mock/mock.o 00:01:50.403 LIB libspdk_log.a 00:01:50.403 LIB libspdk_ut_mock.a 00:01:50.403 LIB libspdk_ut.a 00:01:50.661 CC lib/ioat/ioat.o 00:01:50.919 CXX lib/trace_parser/trace.o 00:01:50.919 CC lib/dma/dma.o 00:01:50.919 CC lib/util/base64.o 00:01:50.919 CC lib/util/bit_array.o 00:01:50.919 CC lib/util/cpuset.o 00:01:50.919 CC lib/util/crc16.o 00:01:50.919 CC lib/util/crc32.o 00:01:50.919 CC lib/util/crc32c.o 00:01:50.919 CC lib/util/dif.o 00:01:50.919 CC lib/util/crc32_ieee.o 00:01:50.919 CC lib/util/fd.o 00:01:50.919 CC lib/util/crc64.o 00:01:50.919 CC lib/util/hexlify.o 00:01:50.919 CC lib/util/file.o 00:01:50.919 CC lib/util/iov.o 00:01:50.919 CC lib/util/math.o 00:01:50.919 CC lib/util/pipe.o 00:01:50.919 CC lib/util/strerror_tls.o 00:01:50.919 CC lib/util/uuid.o 00:01:50.919 CC lib/util/string.o 00:01:50.919 CC lib/util/fd_group.o 00:01:50.919 CC lib/util/xor.o 00:01:50.919 CC lib/util/zipf.o 00:01:50.919 CC lib/vfio_user/host/vfio_user_pci.o 00:01:50.919 CC lib/vfio_user/host/vfio_user.o 00:01:50.919 LIB libspdk_dma.a 00:01:50.919 LIB libspdk_ioat.a 00:01:50.919 LIB libspdk_vfio_user.a 00:01:51.185 LIB libspdk_util.a 00:01:51.185 LIB libspdk_trace_parser.a 00:01:51.470 CC lib/rdma/common.o 00:01:51.470 CC lib/env_dpdk/pci.o 00:01:51.470 CC lib/rdma/rdma_verbs.o 00:01:51.470 CC lib/env_dpdk/memory.o 00:01:51.470 CC lib/env_dpdk/env.o 00:01:51.470 CC lib/env_dpdk/init.o 00:01:51.470 CC lib/env_dpdk/threads.o 00:01:51.470 CC lib/env_dpdk/pci_ioat.o 00:01:51.470 CC lib/env_dpdk/pci_virtio.o 00:01:51.470 CC lib/env_dpdk/pci_vmd.o 00:01:51.470 CC lib/env_dpdk/pci_idxd.o 00:01:51.470 CC lib/env_dpdk/sigbus_handler.o 00:01:51.470 CC lib/env_dpdk/pci_event.o 00:01:51.470 CC lib/env_dpdk/pci_dpdk.o 00:01:51.470 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:51.470 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:51.470 CC lib/vmd/vmd.o 00:01:51.470 CC lib/vmd/led.o 00:01:51.470 CC lib/conf/conf.o 00:01:51.470 CC lib/json/json_parse.o 00:01:51.470 CC lib/json/json_util.o 00:01:51.470 CC lib/json/json_write.o 00:01:51.470 CC lib/idxd/idxd.o 00:01:51.470 CC lib/idxd/idxd_user.o 00:01:51.750 LIB libspdk_conf.a 00:01:51.750 LIB libspdk_rdma.a 00:01:51.750 LIB libspdk_json.a 00:01:51.750 LIB libspdk_idxd.a 00:01:51.750 LIB libspdk_vmd.a 00:01:52.008 CC lib/jsonrpc/jsonrpc_server.o 00:01:52.008 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:52.008 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:52.008 CC lib/jsonrpc/jsonrpc_client.o 00:01:52.008 LIB libspdk_jsonrpc.a 00:01:52.267 LIB libspdk_env_dpdk.a 00:01:52.526 CC lib/rpc/rpc.o 00:01:52.526 LIB libspdk_rpc.a 00:01:52.785 CC lib/notify/notify_rpc.o 00:01:52.785 CC lib/notify/notify.o 00:01:52.785 CC lib/keyring/keyring.o 00:01:52.785 CC lib/keyring/keyring_rpc.o 00:01:52.785 CC lib/trace/trace.o 00:01:52.785 CC lib/trace/trace_flags.o 00:01:52.785 CC lib/trace/trace_rpc.o 00:01:53.044 LIB libspdk_notify.a 00:01:53.044 LIB libspdk_keyring.a 00:01:53.044 LIB libspdk_trace.a 00:01:53.303 CC lib/sock/sock.o 00:01:53.303 CC lib/sock/sock_rpc.o 00:01:53.303 CC lib/thread/thread.o 00:01:53.303 CC lib/thread/iobuf.o 00:01:53.562 LIB libspdk_sock.a 00:01:53.820 CC 
lib/nvme/nvme_ctrlr_cmd.o 00:01:53.820 CC lib/nvme/nvme_fabric.o 00:01:53.820 CC lib/nvme/nvme_ctrlr.o 00:01:53.820 CC lib/nvme/nvme_ns_cmd.o 00:01:53.820 CC lib/nvme/nvme_ns.o 00:01:53.820 CC lib/nvme/nvme_pcie_common.o 00:01:53.820 CC lib/nvme/nvme_pcie.o 00:01:53.820 CC lib/nvme/nvme_qpair.o 00:01:53.820 CC lib/nvme/nvme.o 00:01:53.820 CC lib/nvme/nvme_quirks.o 00:01:53.820 CC lib/nvme/nvme_transport.o 00:01:53.820 CC lib/nvme/nvme_discovery.o 00:01:53.820 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:53.820 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:53.820 CC lib/nvme/nvme_io_msg.o 00:01:53.820 CC lib/nvme/nvme_tcp.o 00:01:53.820 CC lib/nvme/nvme_opal.o 00:01:53.820 CC lib/nvme/nvme_poll_group.o 00:01:53.820 CC lib/nvme/nvme_zns.o 00:01:53.820 CC lib/nvme/nvme_stubs.o 00:01:53.820 CC lib/nvme/nvme_auth.o 00:01:53.820 CC lib/nvme/nvme_cuse.o 00:01:53.820 CC lib/nvme/nvme_vfio_user.o 00:01:53.820 CC lib/nvme/nvme_rdma.o 00:01:54.078 LIB libspdk_thread.a 00:01:54.337 CC lib/init/json_config.o 00:01:54.337 CC lib/init/subsystem.o 00:01:54.337 CC lib/init/subsystem_rpc.o 00:01:54.337 CC lib/init/rpc.o 00:01:54.337 CC lib/vfu_tgt/tgt_rpc.o 00:01:54.337 CC lib/vfu_tgt/tgt_endpoint.o 00:01:54.337 CC lib/blob/blobstore.o 00:01:54.337 CC lib/blob/request.o 00:01:54.337 CC lib/blob/zeroes.o 00:01:54.337 CC lib/blob/blob_bs_dev.o 00:01:54.337 CC lib/accel/accel_rpc.o 00:01:54.337 CC lib/accel/accel.o 00:01:54.337 CC lib/accel/accel_sw.o 00:01:54.595 CC lib/virtio/virtio.o 00:01:54.595 CC lib/virtio/virtio_vfio_user.o 00:01:54.595 CC lib/virtio/virtio_pci.o 00:01:54.595 CC lib/virtio/virtio_vhost_user.o 00:01:54.595 LIB libspdk_init.a 00:01:54.595 LIB libspdk_vfu_tgt.a 00:01:54.595 LIB libspdk_virtio.a 00:01:54.853 CC lib/event/reactor.o 00:01:54.853 CC lib/event/app.o 00:01:54.853 CC lib/event/log_rpc.o 00:01:54.853 CC lib/event/app_rpc.o 00:01:54.853 CC lib/event/scheduler_static.o 00:01:55.111 LIB libspdk_accel.a 00:01:55.111 LIB libspdk_event.a 00:01:55.111 LIB libspdk_nvme.a 00:01:55.370 CC lib/bdev/bdev.o 00:01:55.370 CC lib/bdev/bdev_rpc.o 00:01:55.370 CC lib/bdev/bdev_zone.o 00:01:55.370 CC lib/bdev/part.o 00:01:55.370 CC lib/bdev/scsi_nvme.o 00:01:56.316 LIB libspdk_blob.a 00:01:56.316 CC lib/blobfs/blobfs.o 00:01:56.316 CC lib/blobfs/tree.o 00:01:56.316 CC lib/lvol/lvol.o 00:01:56.883 LIB libspdk_lvol.a 00:01:56.883 LIB libspdk_blobfs.a 00:01:57.142 LIB libspdk_bdev.a 00:01:57.400 CC lib/ublk/ublk.o 00:01:57.400 CC lib/ublk/ublk_rpc.o 00:01:57.400 CC lib/scsi/dev.o 00:01:57.400 CC lib/scsi/lun.o 00:01:57.400 CC lib/scsi/scsi.o 00:01:57.400 CC lib/scsi/scsi_bdev.o 00:01:57.400 CC lib/scsi/port.o 00:01:57.400 CC lib/scsi/scsi_pr.o 00:01:57.400 CC lib/scsi/scsi_rpc.o 00:01:57.400 CC lib/scsi/task.o 00:01:57.400 CC lib/nbd/nbd.o 00:01:57.400 CC lib/nbd/nbd_rpc.o 00:01:57.400 CC lib/ftl/ftl_init.o 00:01:57.400 CC lib/ftl/ftl_core.o 00:01:57.400 CC lib/ftl/ftl_layout.o 00:01:57.400 CC lib/ftl/ftl_sb.o 00:01:57.400 CC lib/ftl/ftl_debug.o 00:01:57.400 CC lib/ftl/ftl_io.o 00:01:57.400 CC lib/nvmf/ctrlr.o 00:01:57.400 CC lib/ftl/ftl_l2p.o 00:01:57.400 CC lib/nvmf/ctrlr_discovery.o 00:01:57.400 CC lib/ftl/ftl_l2p_flat.o 00:01:57.400 CC lib/nvmf/ctrlr_bdev.o 00:01:57.400 CC lib/nvmf/nvmf.o 00:01:57.400 CC lib/ftl/ftl_nv_cache.o 00:01:57.400 CC lib/nvmf/subsystem.o 00:01:57.400 CC lib/ftl/ftl_band.o 00:01:57.400 CC lib/ftl/ftl_band_ops.o 00:01:57.400 CC lib/nvmf/nvmf_rpc.o 00:01:57.400 CC lib/ftl/ftl_writer.o 00:01:57.400 CC lib/nvmf/transport.o 00:01:57.400 CC lib/ftl/ftl_l2p_cache.o 00:01:57.400 CC 
lib/ftl/ftl_rq.o 00:01:57.400 CC lib/nvmf/tcp.o 00:01:57.400 CC lib/ftl/ftl_reloc.o 00:01:57.400 CC lib/nvmf/stubs.o 00:01:57.400 CC lib/nvmf/vfio_user.o 00:01:57.400 CC lib/ftl/ftl_p2l.o 00:01:57.400 CC lib/nvmf/rdma.o 00:01:57.400 CC lib/ftl/mngt/ftl_mngt.o 00:01:57.400 CC lib/nvmf/auth.o 00:01:57.400 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:57.400 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:57.400 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:57.400 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:57.400 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:57.400 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:57.400 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:57.400 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:57.400 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:57.400 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:57.400 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:57.400 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:57.400 CC lib/ftl/utils/ftl_conf.o 00:01:57.400 CC lib/ftl/utils/ftl_md.o 00:01:57.400 CC lib/ftl/utils/ftl_mempool.o 00:01:57.400 CC lib/ftl/utils/ftl_bitmap.o 00:01:57.400 CC lib/ftl/utils/ftl_property.o 00:01:57.400 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:57.400 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:57.400 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:57.400 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:57.400 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:57.400 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:57.400 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:57.400 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:57.400 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:57.400 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:57.400 CC lib/ftl/base/ftl_base_bdev.o 00:01:57.400 CC lib/ftl/base/ftl_base_dev.o 00:01:57.400 CC lib/ftl/ftl_trace.o 00:01:57.659 LIB libspdk_scsi.a 00:01:57.919 LIB libspdk_nbd.a 00:01:57.919 LIB libspdk_ublk.a 00:01:58.177 LIB libspdk_ftl.a 00:01:58.177 CC lib/iscsi/init_grp.o 00:01:58.177 CC lib/iscsi/conn.o 00:01:58.177 CC lib/iscsi/iscsi.o 00:01:58.177 CC lib/iscsi/md5.o 00:01:58.177 CC lib/iscsi/param.o 00:01:58.177 CC lib/vhost/vhost.o 00:01:58.177 CC lib/vhost/vhost_scsi.o 00:01:58.177 CC lib/iscsi/portal_grp.o 00:01:58.177 CC lib/iscsi/tgt_node.o 00:01:58.177 CC lib/vhost/vhost_rpc.o 00:01:58.177 CC lib/iscsi/iscsi_subsystem.o 00:01:58.177 CC lib/vhost/vhost_blk.o 00:01:58.177 CC lib/iscsi/iscsi_rpc.o 00:01:58.177 CC lib/vhost/rte_vhost_user.o 00:01:58.177 CC lib/iscsi/task.o 00:01:58.745 LIB libspdk_nvmf.a 00:01:58.745 LIB libspdk_vhost.a 00:01:59.004 LIB libspdk_iscsi.a 00:01:59.262 CC module/env_dpdk/env_dpdk_rpc.o 00:01:59.262 CC module/vfu_device/vfu_virtio.o 00:01:59.262 CC module/vfu_device/vfu_virtio_blk.o 00:01:59.262 CC module/vfu_device/vfu_virtio_scsi.o 00:01:59.262 CC module/vfu_device/vfu_virtio_rpc.o 00:01:59.521 CC module/scheduler/gscheduler/gscheduler.o 00:01:59.521 LIB libspdk_env_dpdk_rpc.a 00:01:59.521 CC module/keyring/file/keyring.o 00:01:59.521 CC module/keyring/file/keyring_rpc.o 00:01:59.521 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:59.521 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:59.521 CC module/accel/iaa/accel_iaa.o 00:01:59.521 CC module/accel/iaa/accel_iaa_rpc.o 00:01:59.521 CC module/sock/posix/posix.o 00:01:59.521 CC module/accel/error/accel_error_rpc.o 00:01:59.521 CC module/accel/error/accel_error.o 00:01:59.521 CC module/accel/ioat/accel_ioat.o 00:01:59.521 CC module/accel/ioat/accel_ioat_rpc.o 00:01:59.521 CC module/blob/bdev/blob_bdev.o 00:01:59.521 CC module/accel/dsa/accel_dsa_rpc.o 00:01:59.521 CC module/accel/dsa/accel_dsa.o 00:01:59.521 LIB libspdk_scheduler_gscheduler.a 00:01:59.521 LIB 
libspdk_keyring_file.a 00:01:59.521 LIB libspdk_scheduler_dpdk_governor.a 00:01:59.521 LIB libspdk_scheduler_dynamic.a 00:01:59.521 LIB libspdk_accel_error.a 00:01:59.521 LIB libspdk_accel_iaa.a 00:01:59.521 LIB libspdk_accel_ioat.a 00:01:59.521 LIB libspdk_blob_bdev.a 00:01:59.522 LIB libspdk_accel_dsa.a 00:01:59.780 LIB libspdk_vfu_device.a 00:01:59.780 LIB libspdk_sock_posix.a 00:02:00.038 CC module/bdev/gpt/gpt.o 00:02:00.038 CC module/bdev/gpt/vbdev_gpt.o 00:02:00.038 CC module/bdev/malloc/bdev_malloc.o 00:02:00.038 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:00.038 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:00.038 CC module/bdev/delay/vbdev_delay.o 00:02:00.038 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:00.038 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:00.038 CC module/bdev/null/bdev_null.o 00:02:00.038 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:00.038 CC module/bdev/null/bdev_null_rpc.o 00:02:00.038 CC module/bdev/raid/bdev_raid.o 00:02:00.038 CC module/bdev/raid/bdev_raid_rpc.o 00:02:00.038 CC module/bdev/raid/bdev_raid_sb.o 00:02:00.038 CC module/bdev/lvol/vbdev_lvol.o 00:02:00.038 CC module/bdev/nvme/bdev_nvme.o 00:02:00.038 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:00.038 CC module/bdev/iscsi/bdev_iscsi.o 00:02:00.038 CC module/bdev/raid/raid0.o 00:02:00.038 CC module/bdev/raid/raid1.o 00:02:00.038 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:00.038 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:00.038 CC module/bdev/nvme/bdev_mdns_client.o 00:02:00.038 CC module/bdev/error/vbdev_error.o 00:02:00.038 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:00.038 CC module/bdev/raid/concat.o 00:02:00.038 CC module/bdev/nvme/vbdev_opal.o 00:02:00.038 CC module/bdev/nvme/nvme_rpc.o 00:02:00.038 CC module/blobfs/bdev/blobfs_bdev.o 00:02:00.038 CC module/bdev/ftl/bdev_ftl.o 00:02:00.038 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:00.038 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:00.038 CC module/bdev/error/vbdev_error_rpc.o 00:02:00.039 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:00.039 CC module/bdev/aio/bdev_aio.o 00:02:00.039 CC module/bdev/split/vbdev_split.o 00:02:00.039 CC module/bdev/split/vbdev_split_rpc.o 00:02:00.039 CC module/bdev/aio/bdev_aio_rpc.o 00:02:00.039 CC module/bdev/passthru/vbdev_passthru.o 00:02:00.039 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:00.039 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:00.039 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:00.297 LIB libspdk_blobfs_bdev.a 00:02:00.297 LIB libspdk_bdev_split.a 00:02:00.297 LIB libspdk_bdev_gpt.a 00:02:00.297 LIB libspdk_bdev_null.a 00:02:00.297 LIB libspdk_bdev_error.a 00:02:00.297 LIB libspdk_bdev_ftl.a 00:02:00.297 LIB libspdk_bdev_passthru.a 00:02:00.297 LIB libspdk_bdev_delay.a 00:02:00.297 LIB libspdk_bdev_aio.a 00:02:00.297 LIB libspdk_bdev_malloc.a 00:02:00.297 LIB libspdk_bdev_iscsi.a 00:02:00.297 LIB libspdk_bdev_zone_block.a 00:02:00.297 LIB libspdk_bdev_lvol.a 00:02:00.556 LIB libspdk_bdev_virtio.a 00:02:00.556 LIB libspdk_bdev_raid.a 00:02:01.493 LIB libspdk_bdev_nvme.a 00:02:01.752 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:01.752 CC module/event/subsystems/keyring/keyring.o 00:02:02.012 CC module/event/subsystems/scheduler/scheduler.o 00:02:02.012 CC module/event/subsystems/sock/sock.o 00:02:02.012 CC module/event/subsystems/iobuf/iobuf.o 00:02:02.012 CC module/event/subsystems/vmd/vmd.o 00:02:02.012 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:02.012 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:02.012 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 
00:02:02.012 LIB libspdk_event_keyring.a 00:02:02.012 LIB libspdk_event_vhost_blk.a 00:02:02.012 LIB libspdk_event_scheduler.a 00:02:02.012 LIB libspdk_event_sock.a 00:02:02.012 LIB libspdk_event_vmd.a 00:02:02.012 LIB libspdk_event_vfu_tgt.a 00:02:02.012 LIB libspdk_event_iobuf.a 00:02:02.271 CC module/event/subsystems/accel/accel.o 00:02:02.529 LIB libspdk_event_accel.a 00:02:02.789 CC module/event/subsystems/bdev/bdev.o 00:02:02.789 LIB libspdk_event_bdev.a 00:02:03.357 CC module/event/subsystems/nbd/nbd.o 00:02:03.357 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:03.357 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:03.357 CC module/event/subsystems/scsi/scsi.o 00:02:03.357 CC module/event/subsystems/ublk/ublk.o 00:02:03.357 LIB libspdk_event_nbd.a 00:02:03.357 LIB libspdk_event_ublk.a 00:02:03.357 LIB libspdk_event_scsi.a 00:02:03.357 LIB libspdk_event_nvmf.a 00:02:03.616 CC module/event/subsystems/iscsi/iscsi.o 00:02:03.616 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:03.875 LIB libspdk_event_vhost_scsi.a 00:02:03.875 LIB libspdk_event_iscsi.a 00:02:04.142 CC app/trace_record/trace_record.o 00:02:04.142 CXX app/trace/trace.o 00:02:04.142 CC app/spdk_nvme_discover/discovery_aer.o 00:02:04.142 CC app/spdk_nvme_perf/perf.o 00:02:04.142 CC app/spdk_top/spdk_top.o 00:02:04.142 CC app/spdk_lspci/spdk_lspci.o 00:02:04.142 CC test/rpc_client/rpc_client_test.o 00:02:04.142 CC app/spdk_nvme_identify/identify.o 00:02:04.142 TEST_HEADER include/spdk/accel.h 00:02:04.142 TEST_HEADER include/spdk/accel_module.h 00:02:04.142 CC app/vhost/vhost.o 00:02:04.142 TEST_HEADER include/spdk/assert.h 00:02:04.142 TEST_HEADER include/spdk/barrier.h 00:02:04.142 CC app/nvmf_tgt/nvmf_main.o 00:02:04.142 TEST_HEADER include/spdk/base64.h 00:02:04.142 TEST_HEADER include/spdk/bdev.h 00:02:04.142 TEST_HEADER include/spdk/bdev_module.h 00:02:04.142 TEST_HEADER include/spdk/bdev_zone.h 00:02:04.142 TEST_HEADER include/spdk/bit_array.h 00:02:04.142 TEST_HEADER include/spdk/bit_pool.h 00:02:04.142 CC app/iscsi_tgt/iscsi_tgt.o 00:02:04.142 TEST_HEADER include/spdk/blob_bdev.h 00:02:04.142 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:04.142 TEST_HEADER include/spdk/blobfs.h 00:02:04.142 TEST_HEADER include/spdk/conf.h 00:02:04.142 TEST_HEADER include/spdk/blob.h 00:02:04.142 TEST_HEADER include/spdk/config.h 00:02:04.142 TEST_HEADER include/spdk/cpuset.h 00:02:04.142 TEST_HEADER include/spdk/crc32.h 00:02:04.142 TEST_HEADER include/spdk/crc16.h 00:02:04.142 TEST_HEADER include/spdk/crc64.h 00:02:04.142 TEST_HEADER include/spdk/dif.h 00:02:04.142 TEST_HEADER include/spdk/dma.h 00:02:04.142 TEST_HEADER include/spdk/endian.h 00:02:04.142 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:04.142 TEST_HEADER include/spdk/env_dpdk.h 00:02:04.142 TEST_HEADER include/spdk/env.h 00:02:04.142 TEST_HEADER include/spdk/event.h 00:02:04.142 TEST_HEADER include/spdk/fd_group.h 00:02:04.142 TEST_HEADER include/spdk/file.h 00:02:04.142 TEST_HEADER include/spdk/fd.h 00:02:04.142 TEST_HEADER include/spdk/ftl.h 00:02:04.142 CC app/spdk_tgt/spdk_tgt.o 00:02:04.142 TEST_HEADER include/spdk/gpt_spec.h 00:02:04.142 TEST_HEADER include/spdk/hexlify.h 00:02:04.142 TEST_HEADER include/spdk/histogram_data.h 00:02:04.142 TEST_HEADER include/spdk/idxd.h 00:02:04.142 TEST_HEADER include/spdk/idxd_spec.h 00:02:04.142 TEST_HEADER include/spdk/init.h 00:02:04.142 CC app/spdk_dd/spdk_dd.o 00:02:04.142 TEST_HEADER include/spdk/ioat.h 00:02:04.142 TEST_HEADER include/spdk/ioat_spec.h 00:02:04.142 TEST_HEADER include/spdk/iscsi_spec.h 
00:02:04.142 TEST_HEADER include/spdk/json.h 00:02:04.142 TEST_HEADER include/spdk/jsonrpc.h 00:02:04.142 TEST_HEADER include/spdk/keyring.h 00:02:04.142 TEST_HEADER include/spdk/keyring_module.h 00:02:04.142 TEST_HEADER include/spdk/likely.h 00:02:04.142 TEST_HEADER include/spdk/log.h 00:02:04.142 TEST_HEADER include/spdk/lvol.h 00:02:04.142 TEST_HEADER include/spdk/memory.h 00:02:04.142 TEST_HEADER include/spdk/mmio.h 00:02:04.142 TEST_HEADER include/spdk/nbd.h 00:02:04.142 TEST_HEADER include/spdk/notify.h 00:02:04.142 TEST_HEADER include/spdk/nvme.h 00:02:04.142 TEST_HEADER include/spdk/nvme_intel.h 00:02:04.142 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:04.142 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:04.142 TEST_HEADER include/spdk/nvme_spec.h 00:02:04.142 TEST_HEADER include/spdk/nvme_zns.h 00:02:04.142 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:04.142 TEST_HEADER include/spdk/nvmf.h 00:02:04.142 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:04.142 TEST_HEADER include/spdk/nvmf_spec.h 00:02:04.142 TEST_HEADER include/spdk/opal.h 00:02:04.142 TEST_HEADER include/spdk/nvmf_transport.h 00:02:04.142 TEST_HEADER include/spdk/pci_ids.h 00:02:04.142 TEST_HEADER include/spdk/opal_spec.h 00:02:04.142 TEST_HEADER include/spdk/queue.h 00:02:04.142 TEST_HEADER include/spdk/pipe.h 00:02:04.143 TEST_HEADER include/spdk/reduce.h 00:02:04.143 TEST_HEADER include/spdk/rpc.h 00:02:04.143 TEST_HEADER include/spdk/scheduler.h 00:02:04.143 TEST_HEADER include/spdk/scsi.h 00:02:04.143 TEST_HEADER include/spdk/scsi_spec.h 00:02:04.143 TEST_HEADER include/spdk/string.h 00:02:04.143 TEST_HEADER include/spdk/sock.h 00:02:04.143 TEST_HEADER include/spdk/stdinc.h 00:02:04.143 TEST_HEADER include/spdk/thread.h 00:02:04.143 TEST_HEADER include/spdk/trace.h 00:02:04.143 CC examples/vmd/led/led.o 00:02:04.143 TEST_HEADER include/spdk/trace_parser.h 00:02:04.143 TEST_HEADER include/spdk/tree.h 00:02:04.143 TEST_HEADER include/spdk/ublk.h 00:02:04.143 CC examples/ioat/perf/perf.o 00:02:04.143 TEST_HEADER include/spdk/util.h 00:02:04.143 TEST_HEADER include/spdk/version.h 00:02:04.143 TEST_HEADER include/spdk/uuid.h 00:02:04.143 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:04.143 TEST_HEADER include/spdk/vhost.h 00:02:04.143 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:04.143 TEST_HEADER include/spdk/vmd.h 00:02:04.143 CC examples/sock/hello_world/hello_sock.o 00:02:04.143 TEST_HEADER include/spdk/xor.h 00:02:04.143 CC examples/ioat/verify/verify.o 00:02:04.143 TEST_HEADER include/spdk/zipf.h 00:02:04.143 CXX test/cpp_headers/accel.o 00:02:04.143 CC examples/idxd/perf/perf.o 00:02:04.143 CXX test/cpp_headers/accel_module.o 00:02:04.143 CC examples/nvme/arbitration/arbitration.o 00:02:04.143 CXX test/cpp_headers/assert.o 00:02:04.143 CXX test/cpp_headers/barrier.o 00:02:04.143 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:04.143 CC examples/nvme/hello_world/hello_world.o 00:02:04.143 CXX test/cpp_headers/base64.o 00:02:04.143 CXX test/cpp_headers/bdev.o 00:02:04.143 CC examples/vmd/lsvmd/lsvmd.o 00:02:04.143 CC examples/nvme/reconnect/reconnect.o 00:02:04.143 CC test/app/histogram_perf/histogram_perf.o 00:02:04.143 CXX test/cpp_headers/bdev_module.o 00:02:04.143 CXX test/cpp_headers/bdev_zone.o 00:02:04.143 CXX test/cpp_headers/bit_array.o 00:02:04.143 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:04.143 CXX test/cpp_headers/bit_pool.o 00:02:04.143 CXX test/cpp_headers/blob_bdev.o 00:02:04.143 CXX test/cpp_headers/blobfs_bdev.o 00:02:04.143 CXX test/cpp_headers/blobfs.o 00:02:04.143 CXX 
test/cpp_headers/blob.o 00:02:04.143 CXX test/cpp_headers/conf.o 00:02:04.143 CXX test/cpp_headers/config.o 00:02:04.143 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:04.143 CC examples/util/zipf/zipf.o 00:02:04.143 CC examples/nvme/abort/abort.o 00:02:04.143 CXX test/cpp_headers/crc16.o 00:02:04.143 CXX test/cpp_headers/crc32.o 00:02:04.143 CXX test/cpp_headers/cpuset.o 00:02:04.143 CC app/fio/nvme/fio_plugin.o 00:02:04.143 CXX test/cpp_headers/crc64.o 00:02:04.143 CC examples/accel/perf/accel_perf.o 00:02:04.143 CC test/app/jsoncat/jsoncat.o 00:02:04.143 CXX test/cpp_headers/dma.o 00:02:04.143 CC examples/nvme/hotplug/hotplug.o 00:02:04.143 CXX test/cpp_headers/dif.o 00:02:04.143 CXX test/cpp_headers/endian.o 00:02:04.143 CXX test/cpp_headers/env_dpdk.o 00:02:04.143 CC test/nvme/overhead/overhead.o 00:02:04.143 CXX test/cpp_headers/env.o 00:02:04.143 CXX test/cpp_headers/event.o 00:02:04.143 CC test/event/reactor/reactor.o 00:02:04.143 CXX test/cpp_headers/fd_group.o 00:02:04.143 CXX test/cpp_headers/fd.o 00:02:04.143 CC test/event/reactor_perf/reactor_perf.o 00:02:04.143 CXX test/cpp_headers/file.o 00:02:04.143 CXX test/cpp_headers/ftl.o 00:02:04.143 CC test/app/stub/stub.o 00:02:04.143 CC test/event/event_perf/event_perf.o 00:02:04.143 CXX test/cpp_headers/gpt_spec.o 00:02:04.143 CXX test/cpp_headers/hexlify.o 00:02:04.143 CC test/nvme/compliance/nvme_compliance.o 00:02:04.143 CC test/nvme/fused_ordering/fused_ordering.o 00:02:04.143 CC test/nvme/reset/reset.o 00:02:04.143 CC test/thread/lock/spdk_lock.o 00:02:04.143 CC test/nvme/e2edp/nvme_dp.o 00:02:04.143 CC test/nvme/err_injection/err_injection.o 00:02:04.143 CC test/nvme/boot_partition/boot_partition.o 00:02:04.143 CC test/nvme/cuse/cuse.o 00:02:04.408 CC test/env/pci/pci_ut.o 00:02:04.408 CC test/nvme/sgl/sgl.o 00:02:04.408 CC test/nvme/fdp/fdp.o 00:02:04.408 CC test/nvme/reserve/reserve.o 00:02:04.408 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:04.408 CC test/nvme/startup/startup.o 00:02:04.408 CC test/env/vtophys/vtophys.o 00:02:04.408 CC test/nvme/aer/aer.o 00:02:04.408 CC test/nvme/simple_copy/simple_copy.o 00:02:04.408 CC test/nvme/connect_stress/connect_stress.o 00:02:04.408 CC test/thread/poller_perf/poller_perf.o 00:02:04.408 CC examples/bdev/hello_world/hello_bdev.o 00:02:04.408 CC test/env/memory/memory_ut.o 00:02:04.408 LINK spdk_lspci 00:02:04.408 CC examples/bdev/bdevperf/bdevperf.o 00:02:04.408 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:04.408 CC examples/blob/cli/blobcli.o 00:02:04.408 CC examples/blob/hello_world/hello_blob.o 00:02:04.408 CC test/event/app_repeat/app_repeat.o 00:02:04.408 CC examples/nvmf/nvmf/nvmf.o 00:02:04.408 CC examples/thread/thread/thread_ex.o 00:02:04.408 CC test/event/scheduler/scheduler.o 00:02:04.408 CC app/fio/bdev/fio_plugin.o 00:02:04.408 CC test/dma/test_dma/test_dma.o 00:02:04.408 CC test/blobfs/mkfs/mkfs.o 00:02:04.408 CXX test/cpp_headers/histogram_data.o 00:02:04.408 CC test/accel/dif/dif.o 00:02:04.408 CC test/app/bdev_svc/bdev_svc.o 00:02:04.408 CC test/bdev/bdevio/bdevio.o 00:02:04.408 LINK spdk_nvme_discover 00:02:04.408 LINK rpc_client_test 00:02:04.409 LINK spdk_trace_record 00:02:04.409 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:04.409 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:04.409 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:04.409 CC test/env/mem_callbacks/mem_callbacks.o 00:02:04.409 CC test/lvol/esnap/esnap.o 00:02:04.409 LINK vhost 00:02:04.409 LINK nvmf_tgt 00:02:04.409 LINK led 00:02:04.409 LINK iscsi_tgt 00:02:04.409 LINK lsvmd 
00:02:04.409 LINK interrupt_tgt 00:02:04.409 LINK histogram_perf 00:02:04.409 LINK jsoncat 00:02:04.409 LINK event_perf 00:02:04.409 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:04.409 LINK reactor 00:02:04.409 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:04.409 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:04.409 LINK reactor_perf 00:02:04.409 CXX test/cpp_headers/idxd.o 00:02:04.409 LINK vtophys 00:02:04.409 CXX test/cpp_headers/idxd_spec.o 00:02:04.409 CXX test/cpp_headers/init.o 00:02:04.409 LINK zipf 00:02:04.409 CXX test/cpp_headers/ioat.o 00:02:04.409 CXX test/cpp_headers/ioat_spec.o 00:02:04.409 LINK spdk_tgt 00:02:04.409 CXX test/cpp_headers/iscsi_spec.o 00:02:04.409 CXX test/cpp_headers/json.o 00:02:04.409 LINK pmr_persistence 00:02:04.409 CXX test/cpp_headers/jsonrpc.o 00:02:04.409 CXX test/cpp_headers/keyring.o 00:02:04.409 LINK poller_perf 00:02:04.409 CXX test/cpp_headers/keyring_module.o 00:02:04.409 CXX test/cpp_headers/likely.o 00:02:04.409 CXX test/cpp_headers/log.o 00:02:04.409 CXX test/cpp_headers/lvol.o 00:02:04.409 CXX test/cpp_headers/memory.o 00:02:04.409 CXX test/cpp_headers/mmio.o 00:02:04.409 LINK env_dpdk_post_init 00:02:04.409 CXX test/cpp_headers/nbd.o 00:02:04.409 CXX test/cpp_headers/notify.o 00:02:04.409 CXX test/cpp_headers/nvme.o 00:02:04.409 LINK ioat_perf 00:02:04.409 LINK verify 00:02:04.409 CXX test/cpp_headers/nvme_intel.o 00:02:04.409 CXX test/cpp_headers/nvme_ocssd.o 00:02:04.409 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:04.409 CXX test/cpp_headers/nvme_spec.o 00:02:04.409 LINK stub 00:02:04.409 CXX test/cpp_headers/nvme_zns.o 00:02:04.409 LINK app_repeat 00:02:04.409 CXX test/cpp_headers/nvmf_cmd.o 00:02:04.409 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:04.409 LINK startup 00:02:04.409 CXX test/cpp_headers/nvmf.o 00:02:04.409 fio_plugin.c:1559:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:02:04.409 struct spdk_nvme_fdp_ruhs ruhs; 00:02:04.409 ^ 00:02:04.409 CXX test/cpp_headers/nvmf_spec.o 00:02:04.409 CXX test/cpp_headers/nvmf_transport.o 00:02:04.678 LINK connect_stress 00:02:04.678 LINK err_injection 00:02:04.678 CXX test/cpp_headers/opal.o 00:02:04.678 LINK doorbell_aers 00:02:04.678 LINK cmb_copy 00:02:04.678 LINK hello_world 00:02:04.678 LINK boot_partition 00:02:04.678 CXX test/cpp_headers/opal_spec.o 00:02:04.678 LINK hello_sock 00:02:04.678 LINK fused_ordering 00:02:04.678 LINK reserve 00:02:04.678 CXX test/cpp_headers/pci_ids.o 00:02:04.678 CXX test/cpp_headers/pipe.o 00:02:04.678 LINK hotplug 00:02:04.678 LINK bdev_svc 00:02:04.678 LINK simple_copy 00:02:04.678 CXX test/cpp_headers/queue.o 00:02:04.678 LINK hello_blob 00:02:04.678 LINK mkfs 00:02:04.678 LINK hello_bdev 00:02:04.678 LINK thread 00:02:04.678 LINK reset 00:02:04.678 LINK scheduler 00:02:04.678 LINK nvme_dp 00:02:04.678 LINK overhead 00:02:04.678 CXX test/cpp_headers/reduce.o 00:02:04.678 LINK sgl 00:02:04.678 LINK aer 00:02:04.678 LINK spdk_trace 00:02:04.678 LINK fdp 00:02:04.678 LINK nvmf 00:02:04.678 LINK idxd_perf 00:02:04.678 CXX test/cpp_headers/rpc.o 00:02:04.678 CXX test/cpp_headers/scheduler.o 00:02:04.678 CXX test/cpp_headers/scsi.o 00:02:04.678 CXX test/cpp_headers/scsi_spec.o 00:02:04.678 CXX test/cpp_headers/sock.o 00:02:04.678 LINK arbitration 00:02:04.678 CXX test/cpp_headers/stdinc.o 00:02:04.678 CXX test/cpp_headers/string.o 00:02:04.678 CXX test/cpp_headers/thread.o 00:02:04.678 CXX 
test/cpp_headers/trace.o 00:02:04.678 CXX test/cpp_headers/trace_parser.o 00:02:04.678 CXX test/cpp_headers/tree.o 00:02:04.678 CXX test/cpp_headers/ublk.o 00:02:04.678 LINK reconnect 00:02:04.678 CXX test/cpp_headers/util.o 00:02:04.678 CXX test/cpp_headers/uuid.o 00:02:04.678 CXX test/cpp_headers/version.o 00:02:04.678 CXX test/cpp_headers/vfio_user_pci.o 00:02:04.678 CXX test/cpp_headers/vfio_user_spec.o 00:02:04.678 CXX test/cpp_headers/vhost.o 00:02:04.939 CXX test/cpp_headers/vmd.o 00:02:04.939 CXX test/cpp_headers/xor.o 00:02:04.939 CXX test/cpp_headers/zipf.o 00:02:04.939 LINK abort 00:02:04.939 LINK spdk_dd 00:02:04.939 LINK test_dma 00:02:04.939 LINK dif 00:02:04.939 LINK bdevio 00:02:04.939 LINK nvme_manage 00:02:04.939 LINK accel_perf 00:02:04.939 LINK llvm_vfio_fuzz 00:02:04.939 LINK pci_ut 00:02:04.939 LINK nvme_compliance 00:02:04.939 LINK blobcli 00:02:04.939 LINK nvme_fuzz 00:02:04.939 LINK spdk_bdev 00:02:04.939 LINK mem_callbacks 00:02:05.197 1 warning generated. 00:02:05.197 LINK vhost_fuzz 00:02:05.197 LINK spdk_nvme 00:02:05.197 LINK spdk_nvme_identify 00:02:05.197 LINK spdk_nvme_perf 00:02:05.197 LINK memory_ut 00:02:05.197 LINK bdevperf 00:02:05.458 LINK spdk_top 00:02:05.458 LINK llvm_nvme_fuzz 00:02:05.458 LINK cuse 00:02:06.023 LINK spdk_lock 00:02:06.023 LINK iscsi_fuzz 00:02:07.924 LINK esnap 00:02:08.183 00:02:08.183 real 0m42.672s 00:02:08.183 user 6m15.611s 00:02:08.183 sys 2m45.995s 00:02:08.183 11:38:35 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:08.183 11:38:35 make -- common/autotest_common.sh@10 -- $ set +x 00:02:08.183 ************************************ 00:02:08.183 END TEST make 00:02:08.183 ************************************ 00:02:08.442 11:38:35 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:08.442 11:38:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:08.442 11:38:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:08.442 11:38:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.442 11:38:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:08.442 11:38:35 -- pm/common@44 -- $ pid=3502680 00:02:08.442 11:38:35 -- pm/common@50 -- $ kill -TERM 3502680 00:02:08.442 11:38:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.442 11:38:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:08.442 11:38:35 -- pm/common@44 -- $ pid=3502682 00:02:08.442 11:38:35 -- pm/common@50 -- $ kill -TERM 3502682 00:02:08.442 11:38:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.442 11:38:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:08.442 11:38:35 -- pm/common@44 -- $ pid=3502684 00:02:08.442 11:38:35 -- pm/common@50 -- $ kill -TERM 3502684 00:02:08.442 11:38:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.442 11:38:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:08.442 11:38:35 -- pm/common@44 -- $ pid=3502719 00:02:08.442 11:38:35 -- pm/common@50 -- $ sudo -E kill -TERM 3502719 00:02:08.442 11:38:35 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:08.442 11:38:35 -- nvmf/common.sh@7 -- # uname -s 00:02:08.442 11:38:35 -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:02:08.442 11:38:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:08.442 11:38:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:08.442 11:38:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:08.442 11:38:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:08.442 11:38:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:08.442 11:38:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:08.442 11:38:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:08.442 11:38:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:08.442 11:38:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:08.442 11:38:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:02:08.442 11:38:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:02:08.442 11:38:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:08.442 11:38:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:08.442 11:38:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:08.442 11:38:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:08.442 11:38:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:08.442 11:38:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:08.443 11:38:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:08.443 11:38:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:08.443 11:38:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.443 11:38:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.443 11:38:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.443 11:38:35 -- paths/export.sh@5 -- # export PATH 00:02:08.443 11:38:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.443 11:38:35 -- nvmf/common.sh@47 -- # : 0 00:02:08.443 11:38:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:08.443 11:38:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:08.443 11:38:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:08.443 11:38:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:08.443 11:38:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:08.443 11:38:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:08.443 11:38:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:08.443 11:38:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 
00:02:08.443 11:38:35 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:08.443 11:38:35 -- spdk/autotest.sh@32 -- # uname -s 00:02:08.443 11:38:35 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:08.443 11:38:35 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:08.443 11:38:35 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:08.443 11:38:35 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:08.443 11:38:35 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:08.443 11:38:35 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:08.443 11:38:35 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:08.443 11:38:35 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:08.443 11:38:35 -- spdk/autotest.sh@48 -- # udevadm_pid=3564400 00:02:08.443 11:38:35 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:08.443 11:38:35 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:08.443 11:38:35 -- pm/common@17 -- # local monitor 00:02:08.443 11:38:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.443 11:38:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.443 11:38:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.443 11:38:35 -- pm/common@21 -- # date +%s 00:02:08.443 11:38:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.443 11:38:35 -- pm/common@21 -- # date +%s 00:02:08.443 11:38:35 -- pm/common@25 -- # sleep 1 00:02:08.443 11:38:35 -- pm/common@21 -- # date +%s 00:02:08.443 11:38:35 -- pm/common@21 -- # date +%s 00:02:08.443 11:38:35 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715679515 00:02:08.443 11:38:35 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715679515 00:02:08.443 11:38:35 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715679515 00:02:08.443 11:38:35 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715679515 00:02:08.701 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715679515_collect-vmstat.pm.log 00:02:08.701 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715679515_collect-cpu-load.pm.log 00:02:08.701 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715679515_collect-cpu-temp.pm.log 00:02:08.701 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715679515_collect-bmc-pm.bmc.pm.log 00:02:09.638 11:38:36 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:09.638 11:38:36 -- spdk/autotest.sh@57 -- # timing_enter autotest 
00:02:09.638 11:38:36 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:09.638 11:38:36 -- common/autotest_common.sh@10 -- # set +x 00:02:09.638 11:38:36 -- spdk/autotest.sh@59 -- # create_test_list 00:02:09.638 11:38:36 -- common/autotest_common.sh@744 -- # xtrace_disable 00:02:09.638 11:38:36 -- common/autotest_common.sh@10 -- # set +x 00:02:09.638 11:38:36 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:09.638 11:38:36 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:09.638 11:38:36 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:09.638 11:38:36 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:09.638 11:38:36 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:09.638 11:38:36 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:09.638 11:38:36 -- common/autotest_common.sh@1451 -- # uname 00:02:09.638 11:38:36 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:02:09.638 11:38:36 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:09.638 11:38:36 -- common/autotest_common.sh@1471 -- # uname 00:02:09.638 11:38:36 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:02:09.638 11:38:36 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:09.638 11:38:36 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:02:09.638 11:38:36 -- spdk/autotest.sh@72 -- # hash lcov 00:02:09.638 11:38:36 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:02:09.638 11:38:36 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:09.638 11:38:36 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:09.638 11:38:36 -- common/autotest_common.sh@10 -- # set +x 00:02:09.638 11:38:36 -- spdk/autotest.sh@91 -- # rm -f 00:02:09.638 11:38:36 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:12.926 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:12.926 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:12.926 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:12.926 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:13.185 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:13.185 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:13.185 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:13.185 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:13.185 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:13.185 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:13.185 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:13.185 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:13.185 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:13.444 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:13.444 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:13.444 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:13.444 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:13.444 11:38:40 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:13.444 11:38:40 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:13.444 11:38:40 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:13.444 11:38:40 -- common/autotest_common.sh@1666 
-- # local nvme bdf 00:02:13.444 11:38:40 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:13.444 11:38:40 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:13.445 11:38:40 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:13.445 11:38:40 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:13.445 11:38:40 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:13.445 11:38:40 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:13.445 11:38:40 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:13.445 11:38:40 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:13.445 11:38:40 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:13.445 11:38:40 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:13.445 11:38:40 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:13.445 No valid GPT data, bailing 00:02:13.445 11:38:40 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:13.445 11:38:40 -- scripts/common.sh@391 -- # pt= 00:02:13.445 11:38:40 -- scripts/common.sh@392 -- # return 1 00:02:13.445 11:38:40 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:13.445 1+0 records in 00:02:13.445 1+0 records out 00:02:13.445 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00248565 s, 422 MB/s 00:02:13.445 11:38:40 -- spdk/autotest.sh@118 -- # sync 00:02:13.445 11:38:40 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:13.445 11:38:40 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:13.445 11:38:40 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:20.063 11:38:46 -- spdk/autotest.sh@124 -- # uname -s 00:02:20.063 11:38:46 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:20.063 11:38:46 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:20.063 11:38:46 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:20.063 11:38:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:20.063 11:38:46 -- common/autotest_common.sh@10 -- # set +x 00:02:20.063 ************************************ 00:02:20.063 START TEST setup.sh 00:02:20.063 ************************************ 00:02:20.063 11:38:46 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:20.063 * Looking for test storage... 00:02:20.063 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:20.063 11:38:46 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:20.063 11:38:46 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:20.063 11:38:46 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:20.063 11:38:46 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:20.063 11:38:46 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:20.063 11:38:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:20.063 ************************************ 00:02:20.063 START TEST acl 00:02:20.063 ************************************ 00:02:20.063 11:38:46 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:20.063 * Looking for test storage... 
00:02:20.063 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:20.063 11:38:46 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:20.063 11:38:46 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:20.063 11:38:46 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:20.063 11:38:46 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:20.063 11:38:46 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:20.063 11:38:46 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:20.063 11:38:46 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:20.063 11:38:46 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:20.063 11:38:46 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:20.063 11:38:46 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:20.063 11:38:46 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:20.063 11:38:46 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:20.063 11:38:46 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:20.063 11:38:46 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:20.063 11:38:46 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:20.063 11:38:46 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:24.254 11:38:50 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:24.254 11:38:50 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:24.254 11:38:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:24.254 11:38:50 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:24.254 11:38:50 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:24.254 11:38:50 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:02:26.787 Hugepages 00:02:26.787 node hugesize free / total 00:02:26.787 11:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:26.787 11:38:53 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:26.787 11:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.047 00:02:27.047 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.047 11:38:53 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
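The acl.sh scan in progress here reads the setup.sh status table line by line (read -r _ dev _ _ _ driver _), skipping the hugepage summary and every non-NVMe entry such as the ioatdma channels. Stripped of the xtrace noise, the collection loop amounts to roughly this standalone sketch:

    # Sketch of the scan above: collect NVMe BDFs and their drivers from setup.sh status output.
    declare -a devs=()
    declare -A drivers=()
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue     # not a BDF: header or hugepage line
        [[ $driver == nvme ]] || continue     # ignore ioatdma and other non-NVMe devices
        devs+=("$dev")
        drivers["$dev"]=$driver
    done < <(./scripts/setup.sh status)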
00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:27.047 11:38:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.047 11:38:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:02:27.047 11:38:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:27.047 11:38:54 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:02:27.047 11:38:54 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:27.047 11:38:54 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:27.047 11:38:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.047 11:38:54 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:27.047 11:38:54 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:27.047 11:38:54 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:27.047 11:38:54 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:27.047 11:38:54 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:27.307 ************************************ 00:02:27.307 START TEST denied 00:02:27.307 ************************************ 00:02:27.307 11:38:54 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:02:27.307 11:38:54 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:02:27.307 11:38:54 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:27.307 11:38:54 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:02:27.307 11:38:54 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:27.307 11:38:54 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:02:31.499 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:02:31.499 11:38:57 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:02:31.499 11:38:57 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:31.499 11:38:57 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:31.499 11:38:57 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:02:31.499 11:38:57 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:02:31.499 
11:38:57 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:31.499 11:38:57 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:31.499 11:38:57 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:31.499 11:38:57 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:31.499 11:38:57 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:35.691 00:02:35.691 real 0m7.819s 00:02:35.691 user 0m2.388s 00:02:35.691 sys 0m4.737s 00:02:35.691 11:39:01 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:35.691 11:39:01 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:35.691 ************************************ 00:02:35.691 END TEST denied 00:02:35.691 ************************************ 00:02:35.691 11:39:02 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:35.691 11:39:02 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:35.691 11:39:02 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:35.691 11:39:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:35.691 ************************************ 00:02:35.691 START TEST allowed 00:02:35.691 ************************************ 00:02:35.691 11:39:02 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:02:35.691 11:39:02 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:02:35.691 11:39:02 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:35.691 11:39:02 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:02:35.691 11:39:02 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:35.691 11:39:02 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:02:40.969 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:02:40.969 11:39:07 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:40.969 11:39:07 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:40.969 11:39:07 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:40.969 11:39:07 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:40.969 11:39:07 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:44.260 00:02:44.260 real 0m8.915s 00:02:44.260 user 0m2.487s 00:02:44.260 sys 0m4.969s 00:02:44.260 11:39:10 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:44.260 11:39:10 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:44.260 ************************************ 00:02:44.260 END TEST allowed 00:02:44.260 ************************************ 00:02:44.260 00:02:44.260 real 0m24.308s 00:02:44.260 user 0m7.550s 00:02:44.260 sys 0m14.874s 00:02:44.260 11:39:11 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:44.260 11:39:11 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:44.260 ************************************ 00:02:44.260 END TEST acl 00:02:44.260 ************************************ 00:02:44.260 11:39:11 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:02:44.260 11:39:11 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 
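The denied/allowed pair that just finished drives the same controller (0000:d8:00.0) through setup.sh's block and allow lists: with PCI_BLOCKED set the config pass skips it, with PCI_ALLOWED set the config pass rebinds it to vfio-pci, and each test ends with a reset. In isolation the two modes look roughly like this, with the BDF taken from this run and the invocation simplified:

    # Sketch: exercising scripts/setup.sh with an explicit block list, then an allow list.
    PCI_BLOCKED=' 0000:d8:00.0' ./scripts/setup.sh config   # controller is skipped ("denied")
    ./scripts/setup.sh reset
    PCI_ALLOWED='0000:d8:00.0' ./scripts/setup.sh config    # only this controller -> vfio-pci
    ./scripts/setup.sh reset                                # hand devices back to kernel drivers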
00:02:44.260 11:39:11 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:44.260 11:39:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:44.260 ************************************ 00:02:44.260 START TEST hugepages 00:02:44.260 ************************************ 00:02:44.260 11:39:11 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:02:44.260 * Looking for test storage... 00:02:44.260 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 40990808 kB' 'MemAvailable: 42906616 kB' 'Buffers: 12440 kB' 'Cached: 10731032 kB' 'SwapCached: 21844 kB' 'Active: 7153524 kB' 'Inactive: 4185848 kB' 'Active(anon): 6640492 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578156 kB' 'Mapped: 187012 kB' 'Shmem: 8479836 kB' 'KReclaimable: 298620 kB' 'Slab: 906624 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 608004 kB' 'KernelStack: 21856 kB' 'PageTables: 8500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439060 kB' 'Committed_AS: 10436976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215572 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB' 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # 
[[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.260 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.261 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.262 11:39:11 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.262 11:39:11 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:44.262 11:39:11 setup.sh.hugepages -- 
setup/hugepages.sh@41 -- # echo 0 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:44.262 11:39:11 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:44.262 11:39:11 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:44.262 11:39:11 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:44.262 11:39:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:44.262 ************************************ 00:02:44.262 START TEST default_setup 00:02:44.262 ************************************ 00:02:44.262 11:39:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:02:44.262 11:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:44.262 11:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:44.262 11:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:44.262 11:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:44.262 11:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:44.262 11:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:44.262 11:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:44.263 11:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:44.263 11:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:44.263 11:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:44.263 11:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:44.263 11:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:44.263 11:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:44.263 11:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:44.263 11:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:44.263 11:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:44.263 11:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:44.263 11:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:44.263 11:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:44.263 11:39:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:44.263 11:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:44.263 11:39:11 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:02:47.555 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:47.555 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:47.555 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:47.555 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:47.555 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:47.555 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 
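Before default_setup starts rebinding devices, the hugepages.sh prologue above walked /proc/meminfo to find the default hugepage size (2048 kB here) and zeroed the per-node counts via clear_hp. An equivalent standalone sketch using the standard sysfs paths:

    # Sketch of the prologue above: read the default hugepage size, clear every node's count.
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this machine
    for node in /sys/devices/system/node/node[0-9]*; do
        echo 0 > "$node/hugepages/hugepages-${hugepagesize_kb}kB/nr_hugepages"
    done
    export CLEAR_HUGE=yes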
00:02:47.814 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:47.814 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:47.814 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:47.814 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:47.814 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:47.814 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:47.814 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:47.814 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:47.814 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:47.814 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:49.725 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43228588 kB' 'MemAvailable: 45144396 kB' 'Buffers: 12440 kB' 'Cached: 10731156 kB' 'SwapCached: 21844 kB' 'Active: 7167444 kB' 'Inactive: 4185848 kB' 'Active(anon): 6654412 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 591200 kB' 'Mapped: 186436 kB' 'Shmem: 8479960 kB' 'KReclaimable: 298620 kB' 'Slab: 904956 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 606336 kB' 'KernelStack: 22192 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 
'Committed_AS: 10455620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215828 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB' 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.725 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.725 11:39:16 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... 00:02:49.725-00:02:49.726: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue for every remaining /proc/meminfo field (Active(anon) through HardwareCorrupted); none of them match ...]
00:02:49.726 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:49.726 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:49.726 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:49.726 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
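For reference, the lookup that just produced anon=0 follows the pattern visible in the trace: setup/common.sh reads /proc/meminfo (or a per-node meminfo file when a node is given), strips any "Node N " prefix, then walks the file with IFS=': ' until the requested key matches and echoes its value. A minimal standalone sketch of that flow (hypothetical helper name meminfo_field; not the actual SPDK function) could look like this:

    #!/usr/bin/env bash
    # Sketch only: mirrors the traced get_meminfo flow (mapfile the file,
    # strip the "Node N " prefix, scan with IFS=': ' until the key matches).
    shopt -s extglob

    meminfo_field() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # Use the per-node meminfo when a NUMA node is requested and present.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    meminfo_field AnonHugePages   # prints 0 on this machine, matching anon=0 above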
00:02:49.726 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:49.726 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:49.726 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:49.727 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:49.727 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:49.727 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:49.727 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:49.727 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:49.727 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:49.727 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:49.727 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:49.727 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:49.727 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43232796 kB' 'MemAvailable: 45148604 kB' 'Buffers: 12440 kB' 'Cached: 10731160 kB' 'SwapCached: 21844 kB' 'Active: 7168140 kB' 'Inactive: 4185848 kB' 'Active(anon): 6655108 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 591936 kB' 'Mapped: 186456 kB' 'Shmem: 8479964 kB' 'KReclaimable: 298620 kB' 'Slab: 904860 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 606240 kB' 'KernelStack: 22048 kB' 'PageTables: 8916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10446512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215844 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB'
[... 00:02:49.727-00:02:49.728: setup/common.sh@31-32 runs the same scan against HugePages_Surp, checking every /proc/meminfo field from MemTotal through HugePages_Rsvd; none of them match ...]
00:02:49.728 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:49.728 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:49.728 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:49.728 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
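The snapshot printed above already holds the counters this test cares about, and the same numbers can be read straight off the box. Assuming the usual /proc/meminfo formatting (column widths here are illustrative), a quick manual check would be:

    $ grep -E '^HugePages_(Total|Free|Rsvd|Surp)|^Hugepagesize' /proc/meminfo
    HugePages_Total:    1024
    HugePages_Free:     1024
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB

These are the values the script folds into its surplus/reserved/anon bookkeeping below.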
00:02:49.728 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:49.728 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:49.728 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:49.728 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:49.728 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:49.728 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:49.728 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:49.728 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:49.728 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:49.728 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:49.728 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:49.729 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:49.729 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43231304 kB' 'MemAvailable: 45147112 kB' 'Buffers: 12440 kB' 'Cached: 10731176 kB' 'SwapCached: 21844 kB' 'Active: 7166952 kB' 'Inactive: 4185848 kB' 'Active(anon): 6653920 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 590636 kB' 'Mapped: 186340 kB' 'Shmem: 8479980 kB' 'KReclaimable: 298620 kB' 'Slab: 904876 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 606256 kB' 'KernelStack: 21936 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10447928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215764 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB'
[... 00:02:49.729-00:02:49.730: setup/common.sh@31-32 scans the /proc/meminfo fields MemTotal through HugePages_Free against HugePages_Rsvd; none of them match ...]
00:02:49.730 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:49.730 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:49.730 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:49.730 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:02:49.730 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:49.730 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:49.730 nr_hugepages=1024 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:49.730 resv_hugepages=0 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:49.730 surplus_hugepages=0 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:49.730 anon_hugepages=0 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
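The two arithmetic checks above are the heart of the verification: the page count the test compares against (1024 here; this excerpt does not show which counter that value was read from) has to equal the configured nr_hugepages once surplus and reserved pages are added in, and also nr_hugepages on its own. A standalone sketch of that bookkeeping, reusing the hypothetical meminfo_field helper sketched earlier in this log, might be:

    # Sketch only: variable names and the meminfo_field helper are
    # illustrative, not the actual setup/hugepages.sh code.
    nr_hugepages=1024
    surp=$(meminfo_field HugePages_Surp)   # 0 in the trace above
    resv=$(meminfo_field HugePages_Rsvd)   # 0 in the trace above
    expected=1024                          # the value the test compared against

    (( expected == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
    (( expected == nr_hugepages ))               || echo "hugepage count mismatch" >&2

With both checks passing, the trace goes on to read HugePages_Total.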
7166956 kB' 'Inactive: 4185848 kB' 'Active(anon): 6653924 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 590584 kB' 'Mapped: 186304 kB' 'Shmem: 8480004 kB' 'KReclaimable: 298620 kB' 'Slab: 904840 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 606220 kB' 'KernelStack: 22016 kB' 'PageTables: 8764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10448088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215732 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB' 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.731 11:39:16 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.731 
11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.731 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.732 11:39:16 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:02:49.732 
11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.732 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21850080 kB' 'MemUsed: 10789060 kB' 'SwapCached: 19204 kB' 'Active: 4190836 kB' 'Inactive: 3198520 kB' 'Active(anon): 4085768 kB' 'Inactive(anon): 2428280 kB' 'Active(file): 105068 kB' 'Inactive(file): 770240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7001048 kB' 'Mapped: 119580 kB' 'AnonPages: 391528 kB' 'Shmem: 6106536 kB' 'KernelStack: 13144 kB' 'PageTables: 4780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 192244 kB' 'Slab: 514372 kB' 'SReclaimable: 192244 kB' 'SUnreclaim: 322128 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:49.733 11:39:16 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.733 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.734 11:39:16 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:49.734 node0=1024 expecting 1024 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:49.734 00:02:49.734 real 0m5.256s 00:02:49.734 user 0m1.340s 00:02:49.734 sys 0m2.411s 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:49.734 11:39:16 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:02:49.734 ************************************ 00:02:49.734 END TEST default_setup 00:02:49.734 ************************************ 00:02:49.734 11:39:16 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:49.734 11:39:16 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:49.734 11:39:16 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:49.734 11:39:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:49.734 ************************************ 00:02:49.734 START TEST per_node_1G_alloc 00:02:49.734 ************************************ 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
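[editor's note] The long runs of "[[ key == \H\u\g\e\P\a\g\e\s... ]] / continue" above are xtrace output of setup/common.sh's get_meminfo helper scanning every /proc/meminfo (or per-node meminfo) line until it reaches the requested field, after which setup/hugepages.sh checks the 1024-page accounting that default_setup configured. Below is a condensed, standalone sketch of that lookup and check, paraphrased from the traced commands; the function name get_meminfo_sketch and the final echo messages are illustrative, not the exact SPDK helpers.

```bash
#!/usr/bin/env bash
shopt -s extglob   # the traced helper uses the +([0-9]) extglob pattern

# Simplified sketch of the meminfo lookup repeated in the xtrace above:
# read /proc/meminfo (or a per-node meminfo file), strip the "Node N "
# prefix, and print the value for the requested field.
get_meminfo_sketch() {
	local get=$1 node=${2:-}
	local mem_f=/proc/meminfo
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	local line var val _
	while read -r line; do
		line=${line#Node +([0-9]) }   # per-node files prefix each line with "Node N "
		IFS=': ' read -r var val _ <<< "$line"
		if [[ $var == "$get" ]]; then
			echo "$val"
			return 0
		fi
	done < "$mem_f"
	echo 0
}

# Mirrors the hugepages.sh@107 check traced above: the configured page count
# (1024 in default_setup) must equal HugePages_Total minus reserved/surplus.
total=$(get_meminfo_sketch HugePages_Total)
resv=$(get_meminfo_sketch HugePages_Rsvd)
surp=$(get_meminfo_sketch HugePages_Surp)
node0=$(get_meminfo_sketch HugePages_Total 0)
echo "system: $total pages (rsvd=$resv surp=$surp), node0: $node0 pages"
(( total == 1024 + resv + surp )) && echo "matches the 1024-page default_setup target"
```

The sketch keeps the same parsing strategy the trace shows (field-by-field scan with IFS=': '), only collapsed into a loop instead of the unrolled xtrace lines.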
00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:49.734 11:39:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:02:53.029 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:53.029 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:53.030 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:53.030 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:53.030 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:53.030 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:53.030 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:53.030 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:53.030 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:53.030 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:53.030 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:53.030 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:53.030 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:53.030 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:53.030 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:53.030 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:53.030 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43273232 kB' 'MemAvailable: 45189040 kB' 'Buffers: 12440 kB' 'Cached: 10731316 kB' 'SwapCached: 21844 kB' 'Active: 7168436 kB' 'Inactive: 4185848 kB' 'Active(anon): 6655404 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 591848 kB' 'Mapped: 186344 kB' 'Shmem: 8480120 kB' 'KReclaimable: 298620 kB' 'Slab: 905324 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 606704 kB' 'KernelStack: 22064 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10448712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216084 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 
13631488 kB' 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
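[editor's note] Earlier in this test, per_node_1G_alloc exported NRHUGE=512 and HUGENODE=0,1 before re-running scripts/setup.sh (the vfio-pci lines are that script's device output), asking for 512 default-size 2048 kB pages on each of the two NUMA nodes. As a hedged illustration of what such a per-node request corresponds to at the kernel level (this is the generic sysfs interface, not a copy of setup.sh internals), one can set and read back the per-node counts directly; the node list and page count below simply echo the values seen in this test.

```bash
#!/usr/bin/env bash
# Request 512 pages of the default 2048 kB size on each node (needs root).
for node in 0 1; do
	echo 512 | sudo tee \
		"/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages"
done

# Read back what the kernel actually granted, from the same per-node meminfo
# files that verify_nr_hugepages parses in the trace above.
for node in 0 1; do
	grep -E 'HugePages_(Total|Free)' "/sys/devices/system/node/node$node/meminfo"
done
```

If a node cannot satisfy the request (fragmented or insufficient free memory), the granted count read back will be lower than 512, which is exactly the condition the verify_nr_hugepages accounting in this log is designed to catch.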
00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.030 11:39:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.030 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
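The xtrace above is setup/common.sh's get_meminfo helper walking /proc/meminfo in search of AnonHugePages: every non-matching key falls through to the `continue` at common.sh@32, and once the key matches, its value (0 kB here) is echoed at @33 and returned, which hugepages.sh@97 captures as anon=0. A simplified sketch of that loop, reconstructed from the trace (the real helper in SPDK's test/setup/common.sh also mapfiles the file and strips any leading "Node N " prefix from per-node files, so treat this as illustrative rather than canonical):

# Sketch of the get_meminfo loop reconstructed from the xtrace above.
# $1 is the meminfo key to look up, e.g. AnonHugePages or HugePages_Surp;
# $2 optionally selects a NUMA node (assumption, based on the empty $node seen later).
get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f=/proc/meminfo
    # With an empty $node this path does not exist, so we fall back to /proc/meminfo,
    # exactly as the trace shows at common.sh@23.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] && \
        mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every non-matching key
        echo "$val"                        # value in kB, or a bare page count
        return 0
    done < "$mem_f"
    return 1
}

Because the script runs under `set -x`, every iteration of this loop is traced, which is why the log repeats the IFS / read / continue triplet once per meminfo key.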
00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.031 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43273076 kB' 'MemAvailable: 45188884 kB' 'Buffers: 12440 kB' 'Cached: 10731336 kB' 'SwapCached: 21844 kB' 'Active: 7168828 kB' 'Inactive: 4185848 kB' 'Active(anon): 6655796 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 592280 kB' 'Mapped: 186320 kB' 'Shmem: 8480140 kB' 'KReclaimable: 298620 kB' 'Slab: 905356 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 606736 kB' 'KernelStack: 22080 kB' 'PageTables: 8888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10449100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216004 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB' 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.032 11:39:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.032 11:39:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.032 11:39:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.032 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.033 11:39:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.033 11:39:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:53.033 11:39:19 
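The same helper is then re-entered with get=HugePages_Surp. Because node= is empty, the existence test at common.sh@23 looks for /sys/devices/system/node/node/meminfo, finds nothing, and falls back to /proc/meminfo; the scan again ends in `echo 0` / `return 0`, which hugepages.sh@99 stores as surp=0. Assuming the second positional argument selects a NUMA node, as the empty $node at common.sh@18 suggests, typical calls would look like this hypothetical snippet:

# Hypothetical usage of the helper sketched above; the node id and keys are
# illustrative, not taken verbatim from the SPDK scripts.
surp=$(get_meminfo HugePages_Surp)          # system-wide value from /proc/meminfo
node0_free=$(get_meminfo HugePages_Free 0)  # per-node value, if node0 is present
echo "surplus=$surp node0_free=$node0_free"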
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.033 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43273096 kB' 'MemAvailable: 45188904 kB' 'Buffers: 12440 kB' 'Cached: 10731356 kB' 'SwapCached: 21844 kB' 'Active: 7169144 kB' 'Inactive: 4185848 kB' 'Active(anon): 6656112 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 592608 kB' 'Mapped: 186320 kB' 'Shmem: 8480160 kB' 'KReclaimable: 298620 kB' 'Slab: 905356 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 606736 kB' 'KernelStack: 22048 kB' 'PageTables: 9016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10447732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215956 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB' 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.034 11:39:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.034 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.035 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:53.036 nr_hugepages=1024 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:53.036 resv_hugepages=0 00:02:53.036 11:39:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:53.036 surplus_hugepages=0 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:53.036 anon_hugepages=0 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43282144 kB' 'MemAvailable: 45197952 kB' 'Buffers: 12440 kB' 'Cached: 10731376 kB' 'SwapCached: 21844 kB' 'Active: 7169132 kB' 'Inactive: 4185848 kB' 'Active(anon): 6656100 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 592916 kB' 'Mapped: 186320 kB' 'Shmem: 8480180 kB' 'KReclaimable: 298620 kB' 'Slab: 905356 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 606736 kB' 'KernelStack: 21984 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10446728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215908 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB' 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.036 11:39:20 
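With anon, surp and resv all read back as 0 and the snapshot reporting HugePages_Total: 1024, hugepages.sh echoes the summary values (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and then runs two arithmetic sanity checks, (( 1024 == nr_hugepages + surp + resv )) at @107 and (( 1024 == nr_hugepages )) at @109, before re-reading HugePages_Total. One plausible shape of that verification step, using the helper sketched earlier (variable names follow the trace; the surrounding control flow is an assumption):

# Assumed shape of the hugepages.sh verification step seen in the trace.
nr_hugepages=1024                           # requested pool size
surp=$(get_meminfo HugePages_Surp)          # surplus pages, expected 0
resv=$(get_meminfo HugePages_Rsvd)          # reserved pages, expected 0
total=$(get_meminfo HugePages_Total)        # pages the kernel actually reports
# The pool is considered consistent only if nothing is surplus or reserved and
# the kernel reports exactly the requested number of 2048 kB hugepages.
(( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage accounting"
(( total == nr_hugepages )) || echo "hugepage pool size mismatch"

The remaining trace continues the same pattern: get_meminfo is called once more for HugePages_Total and the scan skips each non-matching key until that entry is reached.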
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.036 11:39:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.036 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.037 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:53.038 11:39:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22948256 kB' 'MemUsed: 9690884 kB' 'SwapCached: 19204 kB' 'Active: 4193636 kB' 'Inactive: 3198520 kB' 'Active(anon): 4088568 kB' 'Inactive(anon): 2428280 kB' 'Active(file): 105068 kB' 'Inactive(file): 770240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7001188 kB' 'Mapped: 119596 kB' 'AnonPages: 394604 kB' 'Shmem: 6106676 kB' 'KernelStack: 13160 kB' 'PageTables: 4772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 192244 kB' 'Slab: 514644 kB' 'SReclaimable: 192244 kB' 'SUnreclaim: 322400 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.038 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.039 11:39:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.039 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 20337392 kB' 'MemUsed: 7318688 kB' 'SwapCached: 2640 kB' 'Active: 2977176 kB' 'Inactive: 987328 kB' 'Active(anon): 2569212 kB' 'Inactive(anon): 6964 kB' 'Active(file): 407964 kB' 'Inactive(file): 980364 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3764516 kB' 'Mapped: 66720 kB' 'AnonPages: 200160 kB' 'Shmem: 2373548 kB' 'KernelStack: 8808 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106376 kB' 'Slab: 390848 kB' 'SReclaimable: 106376 kB' 'SUnreclaim: 284472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
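For reference, a minimal sketch of what the get_meminfo helper in setup/common.sh is doing in the trace above: it reads either /proc/meminfo or a node's /sys/devices/system/node/nodeN/meminfo, strips the "Node N " prefix, and echoes the value of the requested field. This is reconstructed from the xtrace output and simplified (the prefix strip here uses sed, whereas the traced script uses an extglob parameter expansion); it is an illustration, not the script itself.

#!/usr/bin/env bash
# get_meminfo FIELD [NODE] -- print the value of FIELD from /proc/meminfo,
# or from the per-node meminfo file when NODE is given.
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # Per-node counters live in sysfs; fall back to the global file otherwise.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    # Node meminfo lines are prefixed with "Node <id> "; drop it so keys match.
    mapfile -t mem < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # skip every field except the one requested
        echo "$val"
        return 0
    done
    return 1
}

# Example (values from this run): get_meminfo HugePages_Total   -> 1024
#                                 get_meminfo HugePages_Surp 0  -> 0 for NUMA node 0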
00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.040 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.041 11:39:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.041 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.041 11:39:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.300 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.300 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.300 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.300 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.300 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.300 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.300 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.300 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.300 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.300 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.300 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.300 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.300 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.300 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:53.300 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:53.300 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:53.300 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:53.300 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:53.300 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:53.300 node0=512 expecting 512 00:02:53.300 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:53.300 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:53.300 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:53.300 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:53.300 node1=512 expecting 512 00:02:53.300 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:53.300 00:02:53.300 real 0m3.451s 00:02:53.300 user 0m1.297s 00:02:53.300 sys 0m2.223s 00:02:53.300 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:53.300 11:39:20 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:53.300 ************************************ 00:02:53.300 END TEST per_node_1G_alloc 00:02:53.301 ************************************ 00:02:53.301 11:39:20 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:53.301 11:39:20 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:53.301 11:39:20 
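The END TEST banner above closes per_node_1G_alloc. As a condensed illustration of the bookkeeping it just printed (names follow the trace and values are taken from this run; this is not the script itself), the verification in hugepages.sh amounts to:

# Global pool read from /proc/meminfo in the trace above.
nr_hugepages=1024   # HugePages_Total
resv=0              # HugePages_Rsvd
surp=0              # HugePages_Surp
# Per-node pools read from /sys/devices/system/node/node{0,1}/meminfo.
declare -a nodes_test=([0]=512 [1]=512)
# Consistency check corresponding to hugepages.sh@107/@109/@110.
(( nr_hugepages + surp + resv == 1024 )) || echo "unexpected global hugepage pool"
for node in "${!nodes_test[@]}"; do
    # Matches the "node0=512 expecting 512" / "node1=512 expecting 512" lines above.
    echo "node${node}=${nodes_test[node]} expecting 512"
done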
setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:53.301 11:39:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:53.301 ************************************ 00:02:53.301 START TEST even_2G_alloc 00:02:53.301 ************************************ 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.301 11:39:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:02:56.637 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:56.637 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:56.637 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 
00:02:56.637 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:56.637 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:56.637 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:56.637 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:56.637 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:56.637 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:56.637 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:56.637 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:56.637 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:56.637 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:56.637 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:56.637 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:56.637 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:56.637 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43340912 kB' 'MemAvailable: 45256720 kB' 'Buffers: 12440 kB' 'Cached: 10731500 kB' 'SwapCached: 21844 kB' 'Active: 7166432 kB' 'Inactive: 4185848 kB' 'Active(anon): 6653400 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 589908 kB' 'Mapped: 185156 kB' 'Shmem: 8480304 kB' 'KReclaimable: 298620 kB' 'Slab: 905196 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 606576 kB' 'KernelStack: 22000 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10439404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215892 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB' 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:02:56.637 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.638 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 
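The xtrace above shows setup/common.sh resolving a single meminfo field: it reads /proc/meminfo (or a per-node meminfo file) one line at a time with IFS=': ', skips every field with "continue" until the requested key matches, then echoes that field's value (here AnonHugePages -> 0, hence anon=0). A minimal sketch of that lookup pattern, using an assumed helper name rather than the actual get_meminfo implementation in spdk's test/setup/common.sh (which additionally strips the "Node N " prefix when reading /sys/devices/system/node/nodeN/meminfo):

#!/usr/bin/env bash
# Sketch of the lookup pattern traced above; not the real setup/common.sh code.
meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Skip ("continue") every field until the requested key matches.
        [[ $var == "$get" ]] || continue
        echo "$val"            # value only; the trailing "kB" lands in _
        return 0
    done < /proc/meminfo
    return 1
}

# Usage mirroring the traced calls:
meminfo_value AnonHugePages    # -> 0 on the system logged above
meminfo_value HugePages_Surp   # -> 0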
00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43343004 kB' 'MemAvailable: 45258812 kB' 'Buffers: 12440 kB' 'Cached: 10731500 kB' 'SwapCached: 21844 kB' 'Active: 7165908 kB' 'Inactive: 4185848 kB' 'Active(anon): 6652876 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 589328 kB' 'Mapped: 185188 kB' 'Shmem: 8480304 kB' 'KReclaimable: 298620 kB' 'Slab: 905188 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 606568 kB' 'KernelStack: 21936 kB' 'PageTables: 8320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10439420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215860 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB' 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.639 11:39:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.639 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:56.640 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43343004 kB' 'MemAvailable: 45258812 kB' 'Buffers: 12440 kB' 'Cached: 10731500 kB' 'SwapCached: 21844 kB' 'Active: 7165948 kB' 'Inactive: 4185848 kB' 'Active(anon): 6652916 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 589360 kB' 'Mapped: 185188 kB' 'Shmem: 8480304 kB' 'KReclaimable: 298620 kB' 'Slab: 905188 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 606568 kB' 'KernelStack: 21952 kB' 'PageTables: 8368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10439440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215860 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB' 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.641 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.905 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.905 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.905 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.905 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.905 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.905 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.905 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.905 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.905 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.905 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.905 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.905 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.905 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.905 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.905 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.905 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.905 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.905 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.905 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.905 
11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
[xtrace condensed: setup/common.sh@31-32 repeated read -r var val _ / continue for every remaining /proc/meminfo key from Mlocked through HugePages_Free; none of them matched HugePages_Rsvd]
00:02:56.906 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:56.906 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:56.906 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:56.906 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:56.906 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:56.906 nr_hugepages=1024
11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:56.906 resv_hugepages=0
11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:56.906 surplus_hugepages=0
11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:56.906 anon_hugepages=0
11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:56.906 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:56.906 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:56.906 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:56.906 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:56.906 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:56.906 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:56.906 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:56.906 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:56.906 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:56.906 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:56.906 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
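
The get_meminfo trace above (setup/common.sh@17-33) is just a key lookup over /proc/meminfo, or over a per-node meminfo file when a node argument is given. A minimal stand-alone sketch of that parsing pattern, assuming bash; the name get_meminfo_sketch and the sed-based prefix strip are illustrative and not the SPDK helper itself:

#!/usr/bin/env bash
# Sketch of the lookup pattern traced above: find one key in (per-node)
# meminfo and print its numeric value.
get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node argument, read that node's meminfo instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Per-node files prefix every line with "Node <id> "; strip it so both
        # layouts split the same way on ':' and whitespace.
        while IFS=': ' read -r var val _; do
                [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
}

# e.g. resv=$(get_meminfo_sketch HugePages_Rsvd)
#      node0_surp=$(get_meminfo_sketch HugePages_Surp 0)
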
00:02:56.906 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.906 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.906 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43334344 kB' 'MemAvailable: 45250152 kB' 'Buffers: 12440 kB' 'Cached: 10731540 kB' 'SwapCached: 21844 kB' 'Active: 7171264 kB' 'Inactive: 4185848 kB' 'Active(anon): 6658232 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 594692 kB' 'Mapped: 185952 kB' 'Shmem: 8480344 kB' 'KReclaimable: 298620 kB' 'Slab: 905188 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 606568 kB' 'KernelStack: 21904 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10445584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215864 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB' 00:02:56.906 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.906 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.906 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.906 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.906 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.907 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.907 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.907 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.907 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.907 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.907 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.907 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.907 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.907 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.907 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.907 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.907 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.907 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.907 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.907 11:39:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@31-32 repeated IFS=': ' / read -r var val _ / continue for every /proc/meminfo key from SwapCached through Unaccepted; none of them matched HugePages_Total]
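
The per-node lookups that follow switch mem_f to /sys/devices/system/node/nodeN/meminfo and then strip the "Node <id> " prefix with the extglob expansion seen at setup/common.sh@29. A small illustration of that one step, assuming a machine that exposes node0 and a bash with extglob available; the grep filter is only for demonstration:

#!/usr/bin/env bash
# Illustration of mem=("${mem[@]#Node +([0-9]) }"): per-node meminfo lines
# read "Node 0 HugePages_Total:     512"; the expansion drops "Node 0 " so
# the lines parse exactly like /proc/meminfo lines.
shopt -s extglob
mapfile -t mem < /sys/devices/system/node/node0/meminfo
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}" | grep '^HugePages_'
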
11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22977724 kB' 'MemUsed: 9661416 kB' 'SwapCached: 19204 kB' 'Active: 4188972 kB' 'Inactive: 3198520 kB' 'Active(anon): 4083904 kB' 'Inactive(anon): 2428280 kB' 'Active(file): 105068 kB' 'Inactive(file): 770240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7001200 kB' 'Mapped: 118464 kB' 'AnonPages: 389512 kB' 'Shmem: 6106688 kB' 'KernelStack: 13160 kB' 
'PageTables: 4616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 192244 kB' 'Slab: 514560 kB' 'SReclaimable: 192244 kB' 'SUnreclaim: 322316 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.908 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.909 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.909 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.909 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.909 11:39:23 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': '
[xtrace condensed: setup/common.sh@31-32 repeated read -r var val _ / continue for every node0 meminfo key from Active(file) through Unaccepted; none of them matched HugePages_Surp]
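
The surrounding hugepages.sh trace is doing per-node accounting: nr_hugepages=1024 was requested, get_nodes assigned 512 pages to each of the two NUMA nodes, and the per-node HugePages_Surp (plus the earlier HugePages_Rsvd) is folded into nodes_test[]. A rough sketch of the kind of check this builds toward, assuming the usual sysfs layout; the variable names and the awk lookups are illustrative, not the setup/hugepages.sh source:

#!/usr/bin/env bash
# Sketch: verify the even 2G allocation, i.e. 1024 x 2MiB hugepages split
# evenly across the NUMA nodes (512 per node on this two-node machine).
nr_hugepages=1024
resv=0    # HugePages_Rsvd, extracted earlier in the trace
total=0
for node_dir in /sys/devices/system/node/node[0-9]*; do
        # Per-node meminfo lines look like "Node 0 HugePages_Surp:  0".
        surp=$(awk '$3 == "HugePages_Surp:" {print $4}' "$node_dir/meminfo")
        pages=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
        expected=$(( nr_hugepages / 2 + resv + surp ))
        printf '%s: expected %d hugepages, allocated %d\n' "${node_dir##*/}" "$expected" "$pages"
        (( total += pages ))
done
(( total == nr_hugepages )) && echo 'even 2G allocation looks right'
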
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 20362196 kB' 'MemUsed: 7293884 kB' 'SwapCached: 2640 kB' 'Active: 2977920 kB' 'Inactive: 987328 kB' 'Active(anon): 2569956 kB' 'Inactive(anon): 6964 kB' 'Active(file): 407964 kB' 'Inactive(file): 980364 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3764668 kB' 'Mapped: 67224 kB' 'AnonPages: 200684 kB' 'Shmem: 2373700 kB' 'KernelStack: 8760 kB' 
'PageTables: 3660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106376 kB' 'Slab: 390628 kB' 'SReclaimable: 106376 kB' 'SUnreclaim: 284252 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.910 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.911 11:39:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.911 11:39:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:56.911 node0=512 expecting 512 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:56.911 node1=512 expecting 512 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:56.911 00:02:56.911 real 0m3.631s 00:02:56.911 user 0m1.348s 00:02:56.911 sys 0m2.347s 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:56.911 11:39:23 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:56.911 ************************************ 00:02:56.911 END TEST even_2G_alloc 00:02:56.911 ************************************ 00:02:56.911 11:39:23 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:56.911 11:39:23 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:56.911 11:39:23 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:56.911 11:39:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:56.911 ************************************ 00:02:56.911 START TEST odd_alloc 00:02:56.911 
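The even_2G_alloc verification above ends as expected: get_meminfo walks each node's meminfo snapshot one 'key: value' pair at a time with IFS=': ' read, skipping every key until HugePages_Surp matches, and both nodes report 512 free 2048 kB pages. As a minimal sketch of that lookup pattern (a hypothetical helper for illustration, not the actual setup/common.sh), assuming the standard /proc/meminfo layout:

# Hypothetical helper (illustration only): print the value column for one meminfo key.
# For a per-node lookup the traced script reads /sys/devices/system/node/node<N>/meminfo
# and first strips the leading "Node <N> " prefix; that step is omitted here.
get_meminfo_field() {
    local key=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"        # number only; a trailing "kB" ends up in $_
            return 0
        fi
    done < "$file"
    return 1                   # key not found
}

# Example: get_meminfo_field HugePages_Free   -> 512 per node in the run above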
************************************ 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.911 11:39:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:00.201 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:00.201 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:00.201 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:00.201 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:00.201 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:00.201 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:00.201 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:00.201 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:00.201 
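Before the setup.sh output continues, note what the odd_alloc parameterization above just did: 2098176 kB of HUGEMEM works out to 1025 default-size (2048 kB) pages, which the script spreads across the two NUMA nodes as node0=513 and node1=512. A hedged sketch of one way to produce that same split (the function name and layout are illustrative, not setup/hugepages.sh's get_test_nr_hugepages_per_node):

# Hypothetical sketch: split an odd hugepage count across NUMA nodes so the totals
# match the trace above (1025 pages over 2 nodes -> node0=513, node1=512).
split_hugepages() {
    local total=$1 nodes=$2 i per rem
    per=$(( total / nodes ))
    rem=$(( total % nodes ))
    for (( i = 0; i < nodes; i++ )); do
        # the first $rem nodes each take one extra page
        echo "node${i}=$(( per + (i < rem ? 1 : 0) ))"
    done
}

# split_hugepages 1025 2   ->  node0=513  node1=512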
0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:00.463 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:00.463 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:00.463 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:00.463 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:00.463 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:00.463 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:00.463 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:00.463 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43367624 kB' 'MemAvailable: 45283432 kB' 'Buffers: 12440 kB' 'Cached: 10731660 kB' 'SwapCached: 21844 kB' 'Active: 7168892 kB' 'Inactive: 4185848 kB' 'Active(anon): 6655860 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 591712 kB' 'Mapped: 185304 kB' 'Shmem: 8480464 kB' 'KReclaimable: 298620 kB' 'Slab: 904800 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 606180 kB' 'KernelStack: 22160 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 10442724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216020 kB' 
'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB' 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.463 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.464 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.465 
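Up to this point the odd_alloc verification has established anon=0: the traced check '[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]' inspects the transparent-hugepage mode and, since it is not set to never, AnonHugePages is read from /proc/meminfo (0 kB in this run). A rough sketch of that probe, reusing the hypothetical get_meminfo_field helper sketched earlier (illustration only, not the repo's code):

# Hypothetical sketch: only count AnonHugePages when THP is not globally disabled.
thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp_state != *'[never]'* ]]; then
    anon=$(get_meminfo_field AnonHugePages)    # 0 in the run above
else
    anon=0
fi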
11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43368796 kB' 'MemAvailable: 45284604 kB' 'Buffers: 12440 kB' 'Cached: 10731664 kB' 'SwapCached: 21844 kB' 'Active: 7168496 kB' 'Inactive: 4185848 kB' 'Active(anon): 6655464 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 591740 kB' 'Mapped: 185196 kB' 'Shmem: 8480468 kB' 'KReclaimable: 298620 kB' 'Slab: 904808 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 606188 kB' 'KernelStack: 22032 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 10442740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215940 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB' 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.465 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[... setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue for every remaining field of the snapshot that is not HugePages_Surp ...]
00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:00.466 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:00.467 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43369712 kB' 'MemAvailable: 45285520 kB' 'Buffers: 12440 kB' 'Cached: 10731680 kB' 'SwapCached: 21844 kB' 'Active: 7168692 kB' 'Inactive: 4185848 kB' 'Active(anon): 6655660 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 591908 kB' 'Mapped: 185196 kB' 'Shmem: 8480484 kB' 'KReclaimable: 298620 kB' 'Slab: 904808 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 606188 kB' 'KernelStack: 22144 kB' 'PageTables: 8804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 10442760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215924 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB'
[... setup/common.sh@31-32 then scans this snapshot field by field, skipping every field that is not HugePages_Rsvd ...]
00:03:00.468 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:00.468 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:00.468 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:00.468 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:00.468 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:00.468 nr_hugepages=1025
00:03:00.468 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:00.468 resv_hugepages=0
00:03:00.468 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:00.468 surplus_hugepages=0
00:03:00.468 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:00.468 anon_hugepages=0
00:03:00.468 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:00.468 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
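The two lookups above (HugePages_Surp, then HugePages_Rsvd) both walk a meminfo snapshot with an IFS=': ' read loop until the requested key matches, then echo its value. A minimal standalone sketch of that lookup, assuming plain key/value input; the function name and shape here are illustrative, not the project's actual setup/common.sh:

  #!/usr/bin/env bash
  # get_meminfo_value KEY [FILE] - echo the value recorded for KEY, mirroring
  # the IFS=': ' read loop visible in the trace above (simplified sketch).
  get_meminfo_value() {
      local get=$1 file=${2:-/proc/meminfo} var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < "$file"
      return 1
  }
  get_meminfo_value HugePages_Rsvd   # -> 0 on the system traced above

The real helper first captures the whole snapshot with mapfile, which is why the trace shows printf dumping every field before the per-field scan starts.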
00:03:00.468 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:00.468 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:00.468 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:00.468 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:00.468 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:00.468 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:00.468 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:00.468 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:00.468 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:00.468 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:00.468 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:00.468 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:00.469 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43371052 kB' 'MemAvailable: 45286860 kB' 'Buffers: 12440 kB' 'Cached: 10731700 kB' 'SwapCached: 21844 kB' 'Active: 7168340 kB' 'Inactive: 4185848 kB' 'Active(anon): 6655308 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 590948 kB' 'Mapped: 185196 kB' 'Shmem: 8480504 kB' 'KReclaimable: 298620 kB' 'Slab: 904808 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 606188 kB' 'KernelStack: 22112 kB' 'PageTables: 8944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 10442780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215956 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB'
[... setup/common.sh@31-32 scans this snapshot field by field, skipping every field that is not HugePages_Total ...]
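The mem=("${mem[@]#Node +([0-9]) }") step seen before each scan strips the "Node N " prefix that the per-node meminfo files carry, so the same key/value parser handles /proc/meminfo and the per-node files alike. A small self-contained demonstration of that expansion; the sample array contents are invented for illustration:

  #!/usr/bin/env bash
  # The +([0-9]) pattern used in ${var#pattern} needs extglob.
  shopt -s extglob
  mem=('Node 0 MemTotal:       32639140 kB' 'Node 0 HugePages_Total:     512')
  # Strip the "Node <N> " prefix from every element, as setup/common.sh@29 does.
  mem=("${mem[@]#Node +([0-9]) }")
  printf '%s\n' "${mem[@]}"
  # -> MemTotal:       32639140 kB
  # -> HugePages_Total:     512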
00:03:00.732 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:00.732 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:03:00.732 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:00.732 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:00.732 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:00.732 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:03:00.732 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:00.732 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:00.732 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:00.732 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:00.733 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:00.733 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
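get_nodes has just recorded 512 pages on node0 and 513 on node1, i.e. the odd total of 1025 spread across the two NUMA nodes. Those per-node counts come straight from the trace (read back from sysfs); the arithmetic below is only an illustration of how such an odd total divides, not the project's code:

  #!/usr/bin/env bash
  # Split an odd hugepage count across the detected NUMA nodes: each node gets
  # the integer share, the last node also takes the remainder, so 1025 ends up
  # as 512 + 513.
  nr_hugepages=1025
  no_nodes=2                      # node0 and node1 under /sys/devices/system/node
  nodes_sys=()
  for (( node = 0; node < no_nodes; node++ )); do
      nodes_sys[node]=$(( nr_hugepages / no_nodes ))
  done
  (( nodes_sys[no_nodes - 1] += nr_hugepages % no_nodes ))
  echo "node0=${nodes_sys[0]} node1=${nodes_sys[1]}"    # node0=512 node1=513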
00:03:00.733 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:00.733 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:00.733 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:00.733 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:00.733 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:00.733 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:00.733 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:00.733 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:00.733 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:00.733 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:00.733 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:00.733 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:00.733 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:00.733 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:00.733 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22984408 kB' 'MemUsed: 9654732 kB' 'SwapCached: 19204 kB' 'Active: 4192716 kB' 'Inactive: 3198520 kB' 'Active(anon): 4087648 kB' 'Inactive(anon): 2428280 kB' 'Active(file): 105068 kB' 'Inactive(file): 770240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7001244 kB' 'Mapped: 118472 kB' 'AnonPages: 393208 kB' 'Shmem: 6106732 kB' 'KernelStack: 13432 kB' 'PageTables: 5452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 192244 kB' 'Slab: 514208 kB' 'SReclaimable: 192244 kB' 'SUnreclaim: 321964 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... setup/common.sh@31-32 scans this node0 snapshot field by field, skipping every field that is not HugePages_Surp ...]
setup/common.sh@32 -- # continue 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 20387168 kB' 'MemUsed: 7268912 kB' 'SwapCached: 2640 kB' 'Active: 2975852 kB' 'Inactive: 987328 kB' 'Active(anon): 2567888 kB' 'Inactive(anon): 6964 kB' 'Active(file): 407964 kB' 'Inactive(file): 980364 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3764784 kB' 'Mapped: 66720 kB' 'AnonPages: 198464 kB' 'Shmem: 2373816 kB' 'KernelStack: 8776 kB' 'PageTables: 3640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106376 kB' 'Slab: 390600 kB' 'SReclaimable: 106376 kB' 'SUnreclaim: 284224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.734 11:39:27 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.734 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.735 11:39:27 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:00.735 node0=512 expecting 513 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:00.735 node1=513 expecting 512 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:00.735 00:03:00.735 real 0m3.709s 00:03:00.735 user 0m1.361s 00:03:00.735 sys 0m2.410s 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:00.735 11:39:27 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:00.735 ************************************ 00:03:00.735 END TEST odd_alloc 00:03:00.735 ************************************ 00:03:00.735 11:39:27 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:00.735 11:39:27 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:00.735 11:39:27 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:00.735 11:39:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:00.735 ************************************ 00:03:00.736 START TEST custom_alloc 00:03:00.736 ************************************ 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 
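[editor's note] The custom_alloc trace above converts the requested size into hugepages (size=1048576 -> nr_hugepages=512, consistent with the 2048 kB Hugepagesize reported elsewhere in this log), and the get_test_nr_hugepages_per_node steps that follow split that count across the two NUMA nodes (256 each). A minimal sketch of that arithmetic, with hypothetical helper names and kB units assumed from the traced values; this is not the actual setup/hugepages.sh implementation:

# Sketch only: kB units and an even split are assumed; remainder handling omitted.
default_hugepages=2048                        # Hugepagesize in kB, per the meminfo dumps in this log

sketch_get_test_nr_hugepages() {              # hypothetical name
    local size=$1                             # size in kB, e.g. 1048576
    (( size >= default_hugepages )) || return 1
    echo $(( size / default_hugepages ))      # 1048576 / 2048 -> 512 pages
}

sketch_split_per_node() {                     # hypothetical name
    local pages=$1 nodes=$2 node
    local -a per_node=()
    for (( node = 0; node < nodes; node++ )); do
        per_node[node]=$(( pages / nodes ))   # 512 over 2 nodes -> 256 each
    done
    echo "${per_node[@]}"
}

sketch_get_test_nr_hugepages 1048576          # prints 512
sketch_split_per_node 512 2                   # prints "256 256"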
00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for 
node in "${!nodes_hp[@]}" 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:00.736 11:39:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:04.025 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:04.025 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:04.025 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:04.025 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:04.025 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:04.025 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:04.025 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:04.025 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:04.025 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:04.025 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:04.025 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:04.025 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:04.025 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 
00:03:04.025 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:04.025 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:04.025 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:04.025 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:04.025 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:04.290 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:04.290 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:04.290 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:04.290 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:04.290 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:04.290 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:04.290 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:04.290 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:04.290 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:04.290 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:04.290 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:04.290 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:04.290 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:04.290 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.290 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.290 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.290 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.290 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.290 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42402196 kB' 'MemAvailable: 44318004 kB' 'Buffers: 12440 kB' 'Cached: 10731836 kB' 'SwapCached: 21844 kB' 'Active: 7168412 kB' 'Inactive: 4185848 kB' 'Active(anon): 6655380 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 591012 kB' 'Mapped: 185284 kB' 'Shmem: 8480640 kB' 'KReclaimable: 298620 kB' 'Slab: 904588 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 605968 kB' 'KernelStack: 21952 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 10441460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215892 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.291 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
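[editor's note] Both the per-node lookups earlier in the odd_alloc trace and this kernel-wide AnonHugePages lookup follow the same pattern from setup/common.sh: pick /proc/meminfo or the node's own meminfo file, strip the "Node N " prefix that per-node files carry, then scan "key: value" pairs until the requested key matches. A minimal sketch of that pattern under those assumptions, using a hypothetical function name rather than the real helper:

shopt -s extglob                              # needed for the Node-prefix strip below

sketch_get_meminfo() {                        # hypothetical name
    local get=$1 node=$2
    local mem_f=/proc/meminfo line var val _
    local -a mem
    # prefer the per-node file when a node was requested and its meminfo exists
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    for line in "${mem[@]}"; do
        line=${line#Node +([0-9]) }           # per-node lines start with "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue      # the traced loop 'continue's past every non-match
        echo "$val"
        return 0
    done
    return 1
}

sketch_get_meminfo HugePages_Surp 1           # per-node lookup, reported as 0 in the trace above
sketch_get_meminfo AnonHugePages              # no node given, falls back to /proc/meminfo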
00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42401764 kB' 'MemAvailable: 44317572 kB' 'Buffers: 12440 kB' 'Cached: 10731840 kB' 'SwapCached: 21844 kB' 'Active: 7168188 kB' 'Inactive: 4185848 kB' 'Active(anon): 6655156 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 591272 kB' 'Mapped: 185208 kB' 'Shmem: 8480644 kB' 'KReclaimable: 298620 kB' 'Slab: 904560 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 605940 kB' 'KernelStack: 21952 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 10441476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215876 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB' 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.292 11:39:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.292 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 
11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.293 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42401704 kB' 'MemAvailable: 44317512 kB' 'Buffers: 12440 kB' 'Cached: 10731840 kB' 'SwapCached: 21844 kB' 'Active: 7168224 kB' 'Inactive: 4185848 kB' 'Active(anon): 6655192 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 591300 kB' 'Mapped: 185208 kB' 
'Shmem: 8480644 kB' 'KReclaimable: 298620 kB' 'Slab: 904560 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 605940 kB' 'KernelStack: 21968 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 10441496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215892 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB' 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
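The long runs of "[[ <field> == HugePages_Surp ]]" / "continue" entries above (and the matching HugePages_Rsvd run that follows) are the field-scan loop inside setup/common.sh's get_meminfo helper, traced once per meminfo line until the requested field is found and echoed. A minimal bash sketch of what the traced commands amount to - the locals and the "Node N " prefix-strip are taken from the trace, but the node-handling control flow is simplified and the exact upstream source may differ:

# Sketch reconstructed from the setup/common.sh@17-33 trace lines above;
# not a verbatim copy of the SPDK script.
shopt -s extglob   # needed for the +([0-9]) pattern used below

get_meminfo() {
    local get=$1        # field to report, e.g. HugePages_Surp
    local node=$2       # optional NUMA node number
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # With a node argument the per-node file is read instead (see the
    # node0/meminfo lookup traced near the end of this excerpt).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node N "; strip it so the
    # same "key: value" parsing works for both files.
    mem=("${mem[@]#Node +([0-9]) }")

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the repeated checks seen in the trace
        echo "$val"
        return 0
    done
    return 1
}

Called as "get_meminfo HugePages_Surp" it prints 0 for the dump above; "get_meminfo HugePages_Surp 0" reads node0's file, which is the call traced at hugepages.sh@117 further down.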
00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.294 11:39:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.294 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.295 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
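As a quick consistency check on the dumps themselves: every read so far reports HugePages_Total 1536 with a 2048 kB Hugepagesize, which accounts exactly for the Hugetlb figure in the same dumps:

# 1536 huge pages of 2048 kB each
echo $(( 1536 * 2048 ))    # 3145728, matching 'Hugetlb: 3145728 kB' (3 GiB)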
00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:04.296 nr_hugepages=1536 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:04.296 resv_hugepages=0 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:04.296 surplus_hugepages=0 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:04.296 anon_hugepages=0 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42400948 kB' 'MemAvailable: 44316756 kB' 'Buffers: 12440 kB' 'Cached: 10731896 kB' 'SwapCached: 21844 kB' 'Active: 7167880 kB' 'Inactive: 4185848 kB' 'Active(anon): 6654848 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 590864 kB' 'Mapped: 185208 kB' 'Shmem: 8480700 kB' 'KReclaimable: 298620 kB' 'Slab: 904564 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 605944 kB' 'KernelStack: 21936 kB' 'PageTables: 8264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 10441520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215908 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB' 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.296 11:39:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:04.296 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:04.297 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 
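The trace above is setup/common.sh's get_meminfo walking /proc/meminfo field by field until it reaches HugePages_Total and echoes 1536, after which hugepages.sh's get_nodes records 512 pages on node0 and 1024 on node1 (no_nodes=2); the same helper is then reused with a node argument to read HugePages_Surp from /sys/devices/system/node/node0/meminfo. A minimal stand-alone sketch of that lookup, reconstructed from the xtrace output rather than copied from the SPDK source (the name get_meminfo_sketch and the example calls at the end are illustrative only), looks like this:

# Sketch of the lookup performed by get_meminfo in setup/common.sh, reconstructed
# from the xtrace output above; not the verbatim SPDK source.
shopt -s extglob   # needed for the +([0-9]) pattern that strips "Node N " prefixes

get_meminfo_sketch() {
    local get=$1 node=$2
    local var val _ line
    local mem_f=/proc/meminfo
    local -a mem

    # With a node id, read the per-node file instead of the system-wide one.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    # Walk the fields one by one, exactly as the trace shows, until the
    # requested key matches, then print its value (pages or kB).
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo_sketch HugePages_Total     # system-wide total, 1536 in this run
get_meminfo_sketch HugePages_Surp 0    # surplus pages on node 0, 0 in this run

In this run, that lookup is what lets hugepages.sh@110 confirm that 1536 == nr_hugepages + surp + resv before the per-node checks below attribute 512 and 1024 pages to node0 and node1.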
00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 23023656 kB' 'MemUsed: 9615484 kB' 'SwapCached: 19204 kB' 'Active: 4191632 kB' 'Inactive: 3198520 kB' 'Active(anon): 4086564 kB' 'Inactive(anon): 2428280 kB' 'Active(file): 105068 kB' 'Inactive(file): 770240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7001288 kB' 'Mapped: 118488 kB' 'AnonPages: 392036 kB' 'Shmem: 6106776 kB' 'KernelStack: 13176 kB' 'PageTables: 4620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 192244 kB' 'Slab: 513984 kB' 'SReclaimable: 192244 kB' 'SUnreclaim: 321740 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.298 11:39:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.298 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.299 11:39:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:04.299 11:39:31 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 19377916 kB' 'MemUsed: 8278164 kB' 'SwapCached: 2640 kB' 'Active: 2976184 kB' 'Inactive: 987328 kB' 'Active(anon): 2568220 kB' 'Inactive(anon): 6964 kB' 'Active(file): 407964 kB' 'Inactive(file): 980364 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3764916 kB' 'Mapped: 66720 kB' 'AnonPages: 198784 kB' 'Shmem: 2373948 kB' 'KernelStack: 8760 kB' 'PageTables: 3644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106376 kB' 'Slab: 390580 kB' 'SReclaimable: 106376 kB' 'SUnreclaim: 284204 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.299 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:04.300 node0=512 expecting 512 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:04.300 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:04.300 node1=1024 expecting 1024 00:03:04.301 11:39:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:04.301 00:03:04.301 real 0m3.579s 00:03:04.301 user 0m1.374s 00:03:04.301 sys 0m2.252s 00:03:04.301 11:39:31 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:04.301 11:39:31 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:04.301 ************************************ 00:03:04.301 END TEST custom_alloc 00:03:04.301 ************************************ 00:03:04.301 11:39:31 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:04.301 11:39:31 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:04.301 11:39:31 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:04.301 11:39:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:04.560 ************************************ 00:03:04.560 START TEST no_shrink_alloc 00:03:04.560 ************************************ 00:03:04.560 11:39:31 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:03:04.560 11:39:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:04.560 11:39:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:04.560 11:39:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:04.560 11:39:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:04.560 11:39:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:04.560 11:39:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:04.560 11:39:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:04.560 11:39:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:04.560 11:39:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:04.560 11:39:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:04.560 11:39:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:04.560 11:39:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:04.560 11:39:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:04.560 11:39:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:04.560 11:39:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:04.560 11:39:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:04.560 11:39:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:04.560 11:39:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:04.560 11:39:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:04.560 11:39:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:04.560 11:39:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.560 11:39:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:07.851 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:07.851 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:07.851 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:07.851 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:07.851 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:07.851 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:07.851 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:07.851 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:07.851 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:07.851 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:07.851 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:07.851 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:07.851 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:07.851 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:07.851 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:07.851 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:07.851 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43470100 kB' 'MemAvailable: 45385908 kB' 'Buffers: 12440 kB' 'Cached: 10731996 kB' 'SwapCached: 21844 kB' 'Active: 7170488 kB' 'Inactive: 4185848 kB' 'Active(anon): 6657456 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 592888 kB' 'Mapped: 185324 kB' 'Shmem: 8480800 kB' 'KReclaimable: 298620 kB' 'Slab: 904688 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 606068 kB' 'KernelStack: 22176 kB' 'PageTables: 9112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10444780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216132 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB' 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.851 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.852 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.852 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.852 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.852 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.852 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.852 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.852 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.852 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.852 11:39:34 
[setup/common.sh@31-32 trace: the IFS=': ' read loop skips the remaining /proc/meminfo keys (Mlocked through HardwareCorrupted) with "continue" until the requested key comes up]
00:03:07.852 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:07.852 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:07.852 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:07.852 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:07.852 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:07.853 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:07.853 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:07.853 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:07.853 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:07.853 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:07.853 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:07.853 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:07.853 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:07.853 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:07.853 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:07.853 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:07.853 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43469688 kB' 'MemAvailable: 45385496 kB' 'Buffers: 12440 kB' 'Cached: 10731996 kB' 'SwapCached: 21844 kB' 'Active: 7171104 kB' 'Inactive: 4185848 kB' 'Active(anon): 6658072 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 593524 kB' 'Mapped: 185300 kB' 'Shmem: 8480800 kB' 'KReclaimable: 298620 kB' 'Slab: 904680 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 606060 kB' 'KernelStack: 22320 kB' 'PageTables: 9300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10444800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216068 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB'
[setup/common.sh@31-32 trace: the read loop then walks this snapshot key by key, from MemTotal up through HugePages_Rsvd, issuing "continue" for every key that is not HugePages_Surp]
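The trace above shows setup/common.sh's get_meminfo pattern: pick /proc/meminfo (or a per-node meminfo file when a node number is set), read it as "key: value" pairs with IFS=': ', skip every key that is not the one asked for, and echo the value of the first match. A minimal stand-alone sketch of that lookup, written from what the trace shows rather than from the SPDK source (the name get_meminfo_sketch and its argument handling are illustrative), would be:

#!/usr/bin/env bash
# Sketch of a /proc/meminfo lookup in the style traced above (assumption, not SPDK's code).
# Prints the numeric value of one meminfo field; uses the per-node file when a node is given,
# mirroring the "/sys/devices/system/node/node$node/meminfo" check seen at setup/common.sh@23.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]]; then
        mem_f=/sys/devices/system/node/node${node}/meminfo
    fi
    while read -r line; do
        line=${line#"Node $node "}            # per-node files prefix each line with "Node <n> "
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then         # first matching key wins, exactly like the traced loop
            echo "${val:-0}"
            return 0
        fi
    done <"$mem_f"
    return 1
}

# Example use, matching the values gathered in the trace:
anon=$(get_meminfo_sketch AnonHugePages)      # -> 0
surp=$(get_meminfo_sketch HugePages_Surp)     # -> 0

The trace resumes below with the HugePages_Surp lookup completing and the same helper being entered again for HugePages_Rsvd.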
00:03:07.854 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:07.854 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:07.854 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:07.854 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:07.854 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[setup/common.sh@17-31 trace: get_meminfo is entered again with get=HugePages_Rsvd and an empty node, so mem_f stays /proc/meminfo; mapfile re-reads the file and the key/value loop restarts]
00:03:07.855 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43466692 kB' 'MemAvailable: 45382500 kB' 'Buffers: 12440 kB' 'Cached: 10732016 kB' 'SwapCached: 21844 kB' 'Active: 7169672 kB' 'Inactive: 4185848 kB' 'Active(anon): 6656640 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 592512 kB' 'Mapped: 185224 kB' 'Shmem: 8480820 kB' 'KReclaimable: 298620 kB' 'Slab: 904668 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 606048 kB' 'KernelStack: 22160 kB' 'PageTables: 9144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10443312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216036 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB'
[setup/common.sh@31-32 trace: the scan skips every key from MemTotal through HugePages_Free before reaching HugePages_Rsvd]
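The hugepage fields in these snapshots are mutually consistent: HugePages_Total and HugePages_Free are both 1024, HugePages_Rsvd and HugePages_Surp are 0, Hugepagesize is 2048 kB, and Hugetlb reports 2097152 kB, which is exactly 1024 * 2048 kB (2 GiB) parked in the hugetlb pool. A quick way to re-check that relationship on a similar box (a sketch that assumes only the default 2 MiB pool is populated, since Hugetlb also counts other page sizes) is:

# Sketch: cross-check HugePages_Total x Hugepagesize against Hugetlb from /proc/meminfo.
total_pages=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
page_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
hugetlb_kb=$(awk '/^Hugetlb:/ {print $2}' /proc/meminfo)
echo "pool: ${total_pages} pages x ${page_kb} kB = $((total_pages * page_kb)) kB, Hugetlb reports ${hugetlb_kb} kB"
# With the snapshot above: 1024 x 2048 kB = 2097152 kB, matching the reported Hugetlb value.

The trace continues below with the HugePages_Rsvd match and the final hugepage accounting.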
00:03:08.118 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:08.118 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:08.118 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:08.118 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:08.118 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:08.118 nr_hugepages=1024
00:03:08.118 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:08.118 resv_hugepages=0
00:03:08.118 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:08.118 surplus_hugepages=0
00:03:08.118 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:08.118 anon_hugepages=0
00:03:08.118 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:08.118 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:08.118 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[setup/common.sh@17-31 trace: get_meminfo is entered once more with get=HugePages_Total and an empty node, reading /proc/meminfo again]
00:03:08.119 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43465432 kB' 'MemAvailable: 45381240 kB' 'Buffers: 12440 kB' 'Cached: 10732040 kB' 'SwapCached: 21844 kB' 'Active: 7170036 kB' 'Inactive: 4185848 kB' 'Active(anon): 6657004 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 592852 kB' 'Mapped: 185728 kB' 'Shmem: 8480844 kB' 'KReclaimable: 298620 kB' 'Slab: 904668 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 606048 kB' 'KernelStack: 22096 kB' 'PageTables: 8808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10446332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216004 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB'
[setup/common.sh@31-32 trace: the per-key scan for HugePages_Total starts over from MemTotal; the excerpt breaks off while the loop is still skipping non-matching keys, the last one shown being Slab]
setup/common.sh@31 -- # IFS=': ' 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.120 11:39:34 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.120 11:39:34 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for 
node in /sys/devices/system/node/node+([0-9]) 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.120 11:39:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:08.120 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:08.120 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:08.120 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.120 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:08.120 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:08.120 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.120 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21958928 kB' 'MemUsed: 10680212 kB' 'SwapCached: 19204 kB' 'Active: 4196464 kB' 'Inactive: 3198520 kB' 'Active(anon): 4091396 kB' 'Inactive(anon): 2428280 kB' 'Active(file): 105068 kB' 'Inactive(file): 770240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7001288 kB' 'Mapped: 118500 kB' 'AnonPages: 397372 kB' 'Shmem: 6106776 kB' 'KernelStack: 13352 kB' 'PageTables: 5044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 192244 kB' 'Slab: 513984 kB' 'SReclaimable: 192244 kB' 'SUnreclaim: 321740 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.121 
11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.121 11:39:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.122 11:39:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:08.122 node0=1024 expecting 1024 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:08.122 11:39:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:11.417 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:11.417 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:11.417 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:11.417 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:11.417 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:11.417 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:11.417 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:11.417 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:11.417 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:11.417 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:11.417 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:11.417 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:11.417 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:11.417 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:11.417 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:11.417 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:11.417 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:11.417 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 
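The trace above shows setup/hugepages.sh finishing its per-node accounting: it sums the hugepage counts reported for each NUMA node, prints "node0=1024 expecting 1024", and then re-runs scripts/setup.sh with NRHUGE=512, which reports that 1024 pages are already allocated on node0. As a rough, hedged illustration of that per-node check (not the real verify_nr_hugepages; verify_nodes_sketch and its output format are invented here), a minimal bash version could look like this:

#!/usr/bin/env bash
# Minimal sketch, assuming the Linux sysfs layout visible in the log above.
# verify_nodes_sketch is a hypothetical name, not part of setup/hugepages.sh.
shopt -s nullglob

verify_nodes_sketch() {
    local expected=$1 node id pages rc=0
    for node in /sys/devices/system/node/node[0-9]*; do
        id=${node##*node}
        # Per-node meminfo lines look like "Node 0 HugePages_Total:  1024",
        # so the last field is the page count for this node.
        pages=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
        echo "node${id}=${pages} expecting ${expected}"
        [[ $pages == "$expected" ]] || rc=1
    done
    return $rc
}

# On the machine traced above this would report node0=1024 (matching) and
# node1=0 (not matching), mirroring the "node0=1024 expecting 1024" line.
verify_nodes_sketch 1024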
00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43450816 kB' 'MemAvailable: 45366624 kB' 'Buffers: 12440 kB' 'Cached: 10732140 kB' 'SwapCached: 21844 kB' 'Active: 7171580 kB' 'Inactive: 4185848 kB' 'Active(anon): 6658548 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 593920 kB' 'Mapped: 185312 kB' 'Shmem: 8480944 kB' 'KReclaimable: 298620 kB' 'Slab: 904676 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 606056 kB' 'KernelStack: 22080 kB' 'PageTables: 8824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10445576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215908 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB' 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.417 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.418 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.419 
11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.419 
11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43452464 kB' 'MemAvailable: 45368272 kB' 'Buffers: 12440 kB' 'Cached: 10732144 kB' 'SwapCached: 21844 kB' 'Active: 7170928 kB' 'Inactive: 4185848 kB' 'Active(anon): 6657896 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 593600 kB' 'Mapped: 185232 kB' 'Shmem: 8480948 kB' 'KReclaimable: 298620 kB' 'Slab: 904644 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 606024 kB' 'KernelStack: 22000 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10444084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215860 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB' 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.419 11:39:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.419 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.420 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43450812 kB' 'MemAvailable: 45366620 kB' 'Buffers: 12440 kB' 'Cached: 10732164 kB' 'SwapCached: 21844 kB' 'Active: 7170636 kB' 'Inactive: 4185848 kB' 'Active(anon): 6657604 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 593312 kB' 'Mapped: 185224 kB' 'Shmem: 8480968 kB' 'KReclaimable: 298620 kB' 'Slab: 904644 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 606024 kB' 'KernelStack: 22064 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10444108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215876 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB' 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.421 11:39:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.421 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.422 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.422 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.422 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.422 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.422 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.422 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.422 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.422 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.422 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.422 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.422 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.422 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.422 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.422 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.422 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.422 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.422 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:11.423 nr_hugepages=1024 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:11.423 resv_hugepages=0 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:11.423 surplus_hugepages=0 00:03:11.423 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:11.423 anon_hugepages=0 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43450704 kB' 'MemAvailable: 45366512 kB' 'Buffers: 12440 kB' 'Cached: 10732184 kB' 'SwapCached: 21844 kB' 'Active: 7170260 kB' 'Inactive: 4185848 kB' 'Active(anon): 6657228 kB' 'Inactive(anon): 2435244 kB' 'Active(file): 513032 kB' 'Inactive(file): 1750604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8285436 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 592872 kB' 'Mapped: 185232 kB' 'Shmem: 8480988 kB' 'KReclaimable: 298620 kB' 'Slab: 904644 kB' 'SReclaimable: 298620 kB' 'SUnreclaim: 606024 kB' 'KernelStack: 22032 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 10445636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215892 kB' 'VmallocChunk: 0 kB' 'Percpu: 84672 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3136884 kB' 'DirectMap2M: 52123648 kB' 'DirectMap1G: 13631488 kB' 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:11.424 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same [[ field == HugePages_Total ]] / continue / IFS=': ' / read -r var val _ iteration repeats for every remaining /proc/meminfo field that is not HugePages_Total: SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal ...]
00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:11.425 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.426 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21958816 kB' 'MemUsed: 10680324 kB' 'SwapCached: 19204 kB' 'Active: 4191764 kB' 'Inactive: 3198520 kB' 'Active(anon): 4086696 kB' 'Inactive(anon): 2428280 kB' 'Active(file): 105068 kB' 'Inactive(file): 770240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7001296 kB' 'Mapped: 118508 kB' 'AnonPages: 392116 kB' 'Shmem: 6106784 kB' 'KernelStack: 13224 kB' 'PageTables: 4584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 192244 kB' 'Slab: 513916 kB' 'SReclaimable: 192244 kB' 'SUnreclaim: 321672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:11.426 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.426 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.426 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.426 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.426 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.426 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.426 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.426 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.426 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.426 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.426 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.426 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.426 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.426 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.426 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.426 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.426 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.426 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.426 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.426 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.426 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.426 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.426 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.426 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _
[... the scan of the node0 meminfo output continues with the same [[ field == HugePages_Surp ]] / continue / IFS=': ' / read -r var val _ iteration for Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages and ShmemPmdMapped, none of which match HugePages_Surp ...]
00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[
FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:11.427 node0=1024 expecting 1024 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:11.427 00:03:11.427 real 0m6.829s 00:03:11.427 user 0m2.491s 00:03:11.427 sys 0m4.412s 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:11.427 11:39:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:11.427 ************************************ 00:03:11.427 END TEST no_shrink_alloc 00:03:11.427 ************************************ 00:03:11.427 
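The long run of continue/IFS/read entries above is the xtrace of setup/common.sh's get_meminfo helper walking /proc/meminfo (or, when a NUMA node is given, /sys/devices/system/node/nodeN/meminfo) one "field: value" line at a time until it reaches the requested field, then echoing its value. A minimal standalone sketch of that pattern follows; the function name is illustrative, not the SPDK helper itself:

get_meminfo_field() {
    # Return the value of one meminfo field, e.g. HugePages_Total,
    # optionally restricted to a single NUMA node.
    local want=$1 node=${2:-}
    local file=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && file=/sys/devices/system/node/node$node/meminfo
    local var val _
    # Per-node files prefix each line with "Node N "; strip it so the
    # "field: value" layout matches /proc/meminfo.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$want" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$file")
    return 1
}

# Example usage: system-wide and node-0 huge page counts.
get_meminfo_field HugePages_Total
get_meminfo_field HugePages_Free 0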
11:39:38 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:11.427 11:39:38 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:11.427 11:39:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:11.427 11:39:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:11.427 11:39:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:11.427 11:39:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:11.427 11:39:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:11.427 11:39:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:11.427 11:39:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:11.427 11:39:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:11.427 11:39:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:11.427 11:39:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:11.427 11:39:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:11.427 11:39:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:11.427 00:03:11.427 real 0m27.186s 00:03:11.427 user 0m9.490s 00:03:11.427 sys 0m16.524s 00:03:11.427 11:39:38 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:11.427 11:39:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:11.427 ************************************ 00:03:11.427 END TEST hugepages 00:03:11.427 ************************************ 00:03:11.427 11:39:38 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:11.427 11:39:38 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:11.427 11:39:38 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:11.427 11:39:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:11.427 ************************************ 00:03:11.427 START TEST driver 00:03:11.427 ************************************ 00:03:11.427 11:39:38 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:11.427 * Looking for test storage... 
00:03:11.427 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:11.427 11:39:38 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:11.427 11:39:38 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:11.427 11:39:38 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:16.701 11:39:43 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:16.701 11:39:43 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:16.701 11:39:43 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:16.701 11:39:43 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:16.701 ************************************ 00:03:16.701 START TEST guess_driver 00:03:16.701 ************************************ 00:03:16.701 11:39:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:03:16.701 11:39:43 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:16.701 11:39:43 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:16.701 11:39:43 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:16.701 11:39:43 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:16.701 11:39:43 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:16.701 11:39:43 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:16.701 11:39:43 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:16.701 11:39:43 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:16.701 11:39:43 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:16.701 11:39:43 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:03:16.701 11:39:43 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:16.701 11:39:43 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:16.701 11:39:43 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:16.701 11:39:43 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:16.701 11:39:43 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:16.701 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:16.701 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:16.701 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:16.701 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:16.701 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:16.701 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:16.701 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:16.701 11:39:43 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:16.701 11:39:43 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:16.701 11:39:43 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:16.701 11:39:43 
setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:16.701 11:39:43 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:16.701 Looking for driver=vfio-pci 00:03:16.701 11:39:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:16.701 11:39:43 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:16.701 11:39:43 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.701 11:39:43 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker 
setup_driver 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:19.990 11:39:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.365 11:39:48 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:21.365 11:39:48 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:21.365 11:39:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.365 11:39:48 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:21.365 11:39:48 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:21.365 11:39:48 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:21.365 11:39:48 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:26.635 00:03:26.635 real 0m9.758s 00:03:26.635 user 0m2.538s 00:03:26.635 sys 0m5.024s 00:03:26.635 11:39:53 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:26.635 11:39:53 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:26.635 ************************************ 00:03:26.635 END TEST guess_driver 00:03:26.635 ************************************ 00:03:26.635 00:03:26.635 real 0m14.698s 00:03:26.635 user 0m3.903s 00:03:26.635 sys 0m7.839s 00:03:26.635 11:39:53 setup.sh.driver -- common/autotest_common.sh@1122 
-- # xtrace_disable 00:03:26.635 11:39:53 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:26.635 ************************************ 00:03:26.635 END TEST driver 00:03:26.635 ************************************ 00:03:26.635 11:39:53 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:03:26.635 11:39:53 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:26.635 11:39:53 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:26.635 11:39:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:26.635 ************************************ 00:03:26.635 START TEST devices 00:03:26.635 ************************************ 00:03:26.635 11:39:53 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:03:26.635 * Looking for test storage... 00:03:26.635 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:26.635 11:39:53 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:26.635 11:39:53 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:26.635 11:39:53 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:26.635 11:39:53 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:29.951 11:39:56 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:29.951 11:39:56 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:29.951 11:39:56 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:29.951 11:39:56 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:29.951 11:39:56 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:29.951 11:39:56 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:29.952 11:39:56 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:29.952 11:39:56 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:29.952 11:39:56 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:29.952 11:39:56 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:29.952 11:39:56 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:29.952 11:39:56 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:29.952 11:39:56 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:29.952 11:39:56 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:29.952 11:39:56 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:29.952 11:39:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:29.952 11:39:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:29.952 11:39:56 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:03:29.952 11:39:56 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:29.952 11:39:56 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:29.952 11:39:56 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:29.952 11:39:56 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py 
nvme0n1 00:03:29.952 No valid GPT data, bailing 00:03:29.952 11:39:57 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:30.249 11:39:57 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:30.249 11:39:57 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:30.249 11:39:57 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:30.249 11:39:57 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:30.249 11:39:57 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:30.249 11:39:57 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:03:30.249 11:39:57 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:03:30.249 11:39:57 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:30.249 11:39:57 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:03:30.249 11:39:57 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:30.249 11:39:57 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:30.249 11:39:57 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:30.249 11:39:57 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:30.249 11:39:57 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:30.249 11:39:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:30.249 ************************************ 00:03:30.249 START TEST nvme_mount 00:03:30.249 ************************************ 00:03:30.249 11:39:57 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:03:30.249 11:39:57 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:30.249 11:39:57 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:30.249 11:39:57 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:30.249 11:39:57 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:30.249 11:39:57 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:30.249 11:39:57 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:30.249 11:39:57 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:30.249 11:39:57 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:30.249 11:39:57 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:30.249 11:39:57 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:30.249 11:39:57 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:30.249 11:39:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:30.249 11:39:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:30.249 11:39:57 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:30.249 11:39:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:30.249 11:39:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:30.249 11:39:57 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 
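Taken together, the device checks above and the partitioning entries that follow amount to: claim a disk only if it carries no partition table (blkid reports no PTTYPE) and is at least min_disk_size bytes, then zap it, create a single roughly 1 GiB partition, format it ext4 and mount it for the test. A rough standalone equivalent is sketched below; the device path and mountpoint are assumptions for illustration, not the SPDK setup scripts themselves, and the commands are destructive if pointed at a disk that holds data:

disk=/dev/nvme0n1                       # assumed scratch disk
min_size=$((3 * 1024 * 1024 * 1024))    # 3 GiB, matching min_disk_size above

# Skip disks that already have a partition table.
if [[ -n $(blkid -s PTTYPE -o value "$disk") ]]; then
    echo "$disk is already partitioned; skipping" >&2
    exit 1
fi

# Capacity check: /sys/block/<dev>/size counts 512-byte sectors.
size=$(( $(cat /sys/block/${disk##*/}/size) * 512 ))
(( size >= min_size )) || { echo "$disk too small" >&2; exit 1; }

# Wipe old metadata, create one ~1 GiB partition (sectors 2048-2099199),
# then format and mount it.
sgdisk "$disk" --zap-all
sgdisk "$disk" --new=1:2048:2099199
mkfs.ext4 -qF "${disk}p1"
mkdir -p /mnt/nvme_test
mount "${disk}p1" /mnt/nvme_test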
00:03:30.249 11:39:57 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:30.249 11:39:57 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:31.185 Creating new GPT entries in memory. 00:03:31.185 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:31.185 other utilities. 00:03:31.185 11:39:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:31.185 11:39:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:31.185 11:39:58 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:31.185 11:39:58 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:31.185 11:39:58 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:32.121 Creating new GPT entries in memory. 00:03:32.121 The operation has completed successfully. 00:03:32.121 11:39:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:32.121 11:39:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:32.121 11:39:59 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3595095 00:03:32.121 11:39:59 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:32.121 11:39:59 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:32.121 11:39:59 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:32.121 11:39:59 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:32.121 11:39:59 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:32.121 11:39:59 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:32.379 11:39:59 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:32.380 11:39:59 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:32.380 11:39:59 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:32.380 11:39:59 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:32.380 11:39:59 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:32.380 11:39:59 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:32.380 11:39:59 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:32.380 11:39:59 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:32.380 11:39:59 setup.sh.devices.nvme_mount -- 
setup/devices.sh@59 -- # local pci status 00:03:32.380 11:39:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.380 11:39:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:32.380 11:39:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:32.380 11:39:59 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.380 11:39:59 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.668 
11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:35.668 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:35.668 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:35.668 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:03:35.668 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:35.668 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 
mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:35.668 11:40:02 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:35.927 11:40:02 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.927 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:35.927 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:35.927 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:35.927 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.927 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:35.927 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:35.927 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:35.927 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:35.927 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:35.927 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.927 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:35.927 11:40:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:35.927 11:40:02 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.927 11:40:02 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.216 11:40:05 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.749 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.750 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.750 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:41.750 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.010 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:42.010 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:42.010 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:42.010 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.010 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:42.010 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:42.010 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:42.010 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:42.010 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.010 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:42.010 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:42.010 11:40:08 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:42.010 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:42.010 00:03:42.010 real 0m11.925s 00:03:42.010 user 0m3.437s 00:03:42.010 sys 0m6.346s 00:03:42.010 11:40:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:42.010 11:40:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:42.010 
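The nvme_mount pass traced above boils down to a zap/partition/format/mount cycle followed by a mirror-image cleanup. A minimal sketch of that flow, assuming a scratch disk and mount point (the two variables below are placeholders, not values taken from this run):

disk=/dev/nvme0n1            # assumed test disk
mnt=/tmp/nvme_mount_test     # assumed scratch mount point
sgdisk "$disk" --zap-all                           # destroy any existing GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:2048:2099199  # one 1 GiB partition, serialized with flock
mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"                          # quiet, forced ext4 format
mount "${disk}p1" "$mnt"
touch "$mnt/test_nvme"                             # marker file the verify step looks for
umount "$mnt"                                      # cleanup mirrors setup
wipefs --all "${disk}p1"                           # erase the filesystem signature
wipefs --all "$disk"                               # erase GPT and protective-MBR signatures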
************************************ 00:03:42.010 END TEST nvme_mount 00:03:42.010 ************************************ 00:03:42.010 11:40:09 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:42.010 11:40:09 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:42.010 11:40:09 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:42.010 11:40:09 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:42.010 ************************************ 00:03:42.010 START TEST dm_mount 00:03:42.010 ************************************ 00:03:42.010 11:40:09 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:03:42.010 11:40:09 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:42.010 11:40:09 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:42.010 11:40:09 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:42.010 11:40:09 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:42.010 11:40:09 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:42.010 11:40:09 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:42.010 11:40:09 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:42.010 11:40:09 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:42.010 11:40:09 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:42.010 11:40:09 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:42.010 11:40:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:42.010 11:40:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:42.010 11:40:09 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:42.010 11:40:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:42.010 11:40:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:42.010 11:40:09 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:42.010 11:40:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:42.010 11:40:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:42.010 11:40:09 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:42.010 11:40:09 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:42.010 11:40:09 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:43.388 Creating new GPT entries in memory. 00:03:43.388 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:43.388 other utilities. 00:03:43.388 11:40:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:43.388 11:40:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:43.388 11:40:10 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:43.388 11:40:10 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:43.388 11:40:10 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:44.325 Creating new GPT entries in memory. 00:03:44.326 The operation has completed successfully. 00:03:44.326 11:40:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:44.326 11:40:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:44.326 11:40:11 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:44.326 11:40:11 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:44.326 11:40:11 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:45.262 The operation has completed successfully. 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3599457 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.262 11:40:12 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:48.547 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:48.548 
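The dm_mount pass builds a device-mapper target named nvme_dm_test on top of the two freshly created partitions, formats and mounts it, then checks /sys/class/block/*/holders to confirm both partitions point at dm-0. The trace does not show the table handed to dmsetup; a linear concatenation of the two partitions is one way to build an equivalent device (sector counts below are read at run time, not taken from this log):

p1=/dev/nvme0n1p1
p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1")   # partition sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")
dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF
mkfs.ext4 -qF /dev/mapper/nvme_dm_test
ls /sys/class/block/nvme0n1p1/holders   # should list dm-0 once the table is live
# teardown, as in the cleanup further down: dmsetup remove --force nvme_dm_test,
# then wipefs --all each backing partition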
11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:48.548 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:48.548 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:48.548 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:48.548 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:48.548 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:48.548 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:48.548 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.548 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:48.548 11:40:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:48.548 11:40:15 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.548 11:40:15 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:51.833 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.833 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.833 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.833 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.833 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.833 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.833 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.833 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.833 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.833 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.833 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.833 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.833 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.833 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.833 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.833 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:51.834 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:51.834 00:03:51.834 real 0m9.751s 00:03:51.834 user 0m2.373s 00:03:51.834 sys 0m4.473s 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:51.834 11:40:18 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:51.834 ************************************ 00:03:51.834 END TEST dm_mount 00:03:51.834 ************************************ 00:03:51.834 11:40:18 setup.sh.devices -- setup/devices.sh@1 -- # 
cleanup 00:03:51.834 11:40:18 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:51.834 11:40:18 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.834 11:40:18 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:51.834 11:40:18 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:51.834 11:40:18 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:51.834 11:40:18 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:52.093 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:52.093 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:03:52.093 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:52.093 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:52.093 11:40:19 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:52.093 11:40:19 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:03:52.093 11:40:19 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:52.093 11:40:19 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:52.093 11:40:19 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:52.093 11:40:19 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:52.093 11:40:19 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:52.093 00:03:52.093 real 0m26.027s 00:03:52.093 user 0m7.282s 00:03:52.093 sys 0m13.601s 00:03:52.093 11:40:19 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:52.093 11:40:19 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:52.093 ************************************ 00:03:52.093 END TEST devices 00:03:52.093 ************************************ 00:03:52.351 00:03:52.351 real 1m32.676s 00:03:52.351 user 0m28.386s 00:03:52.351 sys 0m53.150s 00:03:52.351 11:40:19 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:52.351 11:40:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:52.351 ************************************ 00:03:52.351 END TEST setup.sh 00:03:52.351 ************************************ 00:03:52.351 11:40:19 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:55.639 Hugepages 00:03:55.639 node hugesize free / total 00:03:55.639 node0 1048576kB 0 / 0 00:03:55.639 node0 2048kB 2048 / 2048 00:03:55.639 node1 1048576kB 0 / 0 00:03:55.639 node1 2048kB 0 / 0 00:03:55.639 00:03:55.639 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:55.639 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:55.639 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:55.639 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:55.639 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:55.639 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:55.639 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:55.639 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:55.639 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:55.639 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:55.639 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:55.639 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:55.639 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:55.639 
I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:55.639 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:55.639 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:55.639 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:55.639 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:55.639 11:40:22 -- spdk/autotest.sh@130 -- # uname -s 00:03:55.639 11:40:22 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:55.639 11:40:22 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:55.639 11:40:22 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:58.924 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:58.924 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:58.924 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:58.924 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:58.924 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:58.924 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:58.924 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:58.924 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:58.924 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:58.924 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:58.924 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:58.924 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:59.182 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:59.182 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:59.182 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:59.182 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:00.565 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:00.824 11:40:27 -- common/autotest_common.sh@1528 -- # sleep 1 00:04:01.761 11:40:28 -- common/autotest_common.sh@1529 -- # bdfs=() 00:04:01.761 11:40:28 -- common/autotest_common.sh@1529 -- # local bdfs 00:04:01.761 11:40:28 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:04:01.761 11:40:28 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:04:01.761 11:40:28 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:01.761 11:40:28 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:01.761 11:40:28 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:01.761 11:40:28 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:01.761 11:40:28 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:01.761 11:40:28 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:01.761 11:40:28 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:d8:00.0 00:04:01.761 11:40:28 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:05.087 Waiting for block devices as requested 00:04:05.087 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:05.087 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:05.346 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:05.346 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:05.346 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:05.346 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:05.604 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:05.604 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:05.604 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:05.862 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:05.862 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:05.862 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:06.121 
0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:06.121 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:06.121 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:06.379 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:06.379 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:06.637 11:40:33 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:04:06.637 11:40:33 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:06.637 11:40:33 -- common/autotest_common.sh@1498 -- # grep 0000:d8:00.0/nvme/nvme 00:04:06.637 11:40:33 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:04:06.637 11:40:33 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:06.637 11:40:33 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:06.637 11:40:33 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:06.637 11:40:33 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:04:06.637 11:40:33 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:04:06.637 11:40:33 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:04:06.637 11:40:33 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:04:06.637 11:40:33 -- common/autotest_common.sh@1541 -- # grep oacs 00:04:06.637 11:40:33 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:04:06.637 11:40:33 -- common/autotest_common.sh@1541 -- # oacs=' 0xe' 00:04:06.637 11:40:33 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:04:06.637 11:40:33 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:04:06.637 11:40:33 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:04:06.637 11:40:33 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:04:06.637 11:40:33 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:04:06.637 11:40:33 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:04:06.637 11:40:33 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:04:06.637 11:40:33 -- common/autotest_common.sh@1553 -- # continue 00:04:06.637 11:40:33 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:06.637 11:40:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:06.637 11:40:33 -- common/autotest_common.sh@10 -- # set +x 00:04:06.637 11:40:33 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:06.637 11:40:33 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:06.637 11:40:33 -- common/autotest_common.sh@10 -- # set +x 00:04:06.637 11:40:33 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:09.924 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:09.924 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:09.924 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:09.924 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:09.924 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:09.924 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:09.924 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:09.924 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:09.924 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:09.924 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:09.924 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:09.924 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:09.924 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:09.924 0000:80:04.2 (8086 2021): 
ioatdma -> vfio-pci 00:04:09.924 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:09.924 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:11.303 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:11.562 11:40:38 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:11.562 11:40:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:11.562 11:40:38 -- common/autotest_common.sh@10 -- # set +x 00:04:11.562 11:40:38 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:11.562 11:40:38 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:04:11.562 11:40:38 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:04:11.562 11:40:38 -- common/autotest_common.sh@1573 -- # bdfs=() 00:04:11.562 11:40:38 -- common/autotest_common.sh@1573 -- # local bdfs 00:04:11.562 11:40:38 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:04:11.562 11:40:38 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:11.562 11:40:38 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:11.562 11:40:38 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:11.562 11:40:38 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:11.562 11:40:38 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:11.562 11:40:38 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:11.562 11:40:38 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:d8:00.0 00:04:11.562 11:40:38 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:04:11.562 11:40:38 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:11.562 11:40:38 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:04:11.562 11:40:38 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:11.562 11:40:38 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:04:11.562 11:40:38 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:d8:00.0 00:04:11.562 11:40:38 -- common/autotest_common.sh@1588 -- # [[ -z 0000:d8:00.0 ]] 00:04:11.562 11:40:38 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=3609037 00:04:11.562 11:40:38 -- common/autotest_common.sh@1594 -- # waitforlisten 3609037 00:04:11.562 11:40:38 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.562 11:40:38 -- common/autotest_common.sh@827 -- # '[' -z 3609037 ']' 00:04:11.562 11:40:38 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.562 11:40:38 -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:11.562 11:40:38 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.562 11:40:38 -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:11.562 11:40:38 -- common/autotest_common.sh@10 -- # set +x 00:04:11.562 [2024-05-14 11:40:38.618118] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
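opal_revert_cleanup narrows the controller list to devices whose PCI device id is 0x0a54 by reading each candidate's sysfs device file, as the cat of /sys/bus/pci/devices/0000:d8:00.0/device shows. The same filter can be expressed directly against sysfs, without going through gen_nvme.sh (a sketch that scans every NVMe-class function on the host, not only the ones SPDK reports):

want=0x0a54
for dev in /sys/bus/pci/devices/*; do
  [[ -r $dev/class && -r $dev/device ]] || continue
  [[ $(cat "$dev/class") == 0x010802* ]] || continue   # PCI class 01/08/02 = NVMe controller
  [[ $(cat "$dev/device") == "$want" ]] && basename "$dev"   # prints the matching BDF
done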
00:04:11.562 [2024-05-14 11:40:38.618169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3609037 ] 00:04:11.562 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.820 [2024-05-14 11:40:38.685118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.820 [2024-05-14 11:40:38.764887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.387 11:40:39 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:12.387 11:40:39 -- common/autotest_common.sh@860 -- # return 0 00:04:12.387 11:40:39 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:04:12.387 11:40:39 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:04:12.387 11:40:39 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:15.671 nvme0n1 00:04:15.671 11:40:42 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:15.671 [2024-05-14 11:40:42.611409] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:15.671 request: 00:04:15.671 { 00:04:15.671 "nvme_ctrlr_name": "nvme0", 00:04:15.671 "password": "test", 00:04:15.671 "method": "bdev_nvme_opal_revert", 00:04:15.671 "req_id": 1 00:04:15.671 } 00:04:15.671 Got JSON-RPC error response 00:04:15.671 response: 00:04:15.671 { 00:04:15.671 "code": -32602, 00:04:15.671 "message": "Invalid parameters" 00:04:15.671 } 00:04:15.671 11:40:42 -- common/autotest_common.sh@1600 -- # true 00:04:15.671 11:40:42 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:04:15.671 11:40:42 -- common/autotest_common.sh@1604 -- # killprocess 3609037 00:04:15.671 11:40:42 -- common/autotest_common.sh@946 -- # '[' -z 3609037 ']' 00:04:15.671 11:40:42 -- common/autotest_common.sh@950 -- # kill -0 3609037 00:04:15.671 11:40:42 -- common/autotest_common.sh@951 -- # uname 00:04:15.671 11:40:42 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:15.671 11:40:42 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3609037 00:04:15.671 11:40:42 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:15.671 11:40:42 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:15.671 11:40:42 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3609037' 00:04:15.671 killing process with pid 3609037 00:04:15.671 11:40:42 -- common/autotest_common.sh@965 -- # kill 3609037 00:04:15.671 11:40:42 -- common/autotest_common.sh@970 -- # wait 3609037 00:04:18.204 11:40:44 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:18.204 11:40:44 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:18.204 11:40:44 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:18.204 11:40:44 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:18.204 11:40:44 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:18.204 11:40:44 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:18.204 11:40:44 -- common/autotest_common.sh@10 -- # set +x 00:04:18.204 11:40:44 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:18.204 11:40:44 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:18.204 11:40:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 
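The bdev_nvme_opal_revert failure above is consistent with the OACS word read earlier in this log: nvme id-ctrl reported oacs = 0xe, and bit 0 of OACS (Security Send/Receive support) is clear in that value, so the controller cannot service OPAL security commands and the RPC returns Invalid parameters. A small check in the same grep/cut style used above:

oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)
if (( oacs & 0x1 )); then
  echo "Security Send/Receive advertised; OPAL revert may be possible"
else
  echo "no Security Send/Receive support; opal revert is expected to fail"
fi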
00:04:18.204 11:40:44 -- common/autotest_common.sh@10 -- # set +x 00:04:18.204 ************************************ 00:04:18.204 START TEST env 00:04:18.204 ************************************ 00:04:18.204 11:40:44 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:18.204 * Looking for test storage... 00:04:18.204 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:04:18.204 11:40:44 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:18.204 11:40:44 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:18.204 11:40:44 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:18.204 11:40:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:18.204 ************************************ 00:04:18.204 START TEST env_memory 00:04:18.204 ************************************ 00:04:18.204 11:40:45 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:18.204 00:04:18.204 00:04:18.204 CUnit - A unit testing framework for C - Version 2.1-3 00:04:18.204 http://cunit.sourceforge.net/ 00:04:18.204 00:04:18.204 00:04:18.204 Suite: memory 00:04:18.204 Test: alloc and free memory map ...[2024-05-14 11:40:45.059462] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:18.204 passed 00:04:18.204 Test: mem map translation ...[2024-05-14 11:40:45.072698] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:18.204 [2024-05-14 11:40:45.072715] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:18.204 [2024-05-14 11:40:45.072747] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:18.204 [2024-05-14 11:40:45.072756] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:18.204 passed 00:04:18.204 Test: mem map registration ...[2024-05-14 11:40:45.094201] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:18.204 [2024-05-14 11:40:45.094221] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:18.204 passed 00:04:18.204 Test: mem map adjacent registrations ...passed 00:04:18.204 00:04:18.204 Run Summary: Type Total Ran Passed Failed Inactive 00:04:18.204 suites 1 1 n/a 0 0 00:04:18.204 tests 4 4 4 0 0 00:04:18.204 asserts 152 152 152 0 n/a 00:04:18.204 00:04:18.204 Elapsed time = 0.087 seconds 00:04:18.204 00:04:18.204 real 0m0.100s 00:04:18.204 user 0m0.088s 00:04:18.204 sys 0m0.012s 00:04:18.204 11:40:45 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:18.204 11:40:45 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:18.204 ************************************ 
00:04:18.204 END TEST env_memory 00:04:18.204 ************************************ 00:04:18.204 11:40:45 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:18.204 11:40:45 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:18.204 11:40:45 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:18.204 11:40:45 env -- common/autotest_common.sh@10 -- # set +x 00:04:18.204 ************************************ 00:04:18.204 START TEST env_vtophys 00:04:18.204 ************************************ 00:04:18.204 11:40:45 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:18.204 EAL: lib.eal log level changed from notice to debug 00:04:18.204 EAL: Detected lcore 0 as core 0 on socket 0 00:04:18.204 EAL: Detected lcore 1 as core 1 on socket 0 00:04:18.204 EAL: Detected lcore 2 as core 2 on socket 0 00:04:18.204 EAL: Detected lcore 3 as core 3 on socket 0 00:04:18.204 EAL: Detected lcore 4 as core 4 on socket 0 00:04:18.204 EAL: Detected lcore 5 as core 5 on socket 0 00:04:18.204 EAL: Detected lcore 6 as core 6 on socket 0 00:04:18.204 EAL: Detected lcore 7 as core 8 on socket 0 00:04:18.204 EAL: Detected lcore 8 as core 9 on socket 0 00:04:18.204 EAL: Detected lcore 9 as core 10 on socket 0 00:04:18.204 EAL: Detected lcore 10 as core 11 on socket 0 00:04:18.204 EAL: Detected lcore 11 as core 12 on socket 0 00:04:18.204 EAL: Detected lcore 12 as core 13 on socket 0 00:04:18.204 EAL: Detected lcore 13 as core 14 on socket 0 00:04:18.204 EAL: Detected lcore 14 as core 16 on socket 0 00:04:18.204 EAL: Detected lcore 15 as core 17 on socket 0 00:04:18.204 EAL: Detected lcore 16 as core 18 on socket 0 00:04:18.204 EAL: Detected lcore 17 as core 19 on socket 0 00:04:18.204 EAL: Detected lcore 18 as core 20 on socket 0 00:04:18.204 EAL: Detected lcore 19 as core 21 on socket 0 00:04:18.204 EAL: Detected lcore 20 as core 22 on socket 0 00:04:18.204 EAL: Detected lcore 21 as core 24 on socket 0 00:04:18.204 EAL: Detected lcore 22 as core 25 on socket 0 00:04:18.204 EAL: Detected lcore 23 as core 26 on socket 0 00:04:18.204 EAL: Detected lcore 24 as core 27 on socket 0 00:04:18.204 EAL: Detected lcore 25 as core 28 on socket 0 00:04:18.204 EAL: Detected lcore 26 as core 29 on socket 0 00:04:18.204 EAL: Detected lcore 27 as core 30 on socket 0 00:04:18.204 EAL: Detected lcore 28 as core 0 on socket 1 00:04:18.204 EAL: Detected lcore 29 as core 1 on socket 1 00:04:18.204 EAL: Detected lcore 30 as core 2 on socket 1 00:04:18.204 EAL: Detected lcore 31 as core 3 on socket 1 00:04:18.204 EAL: Detected lcore 32 as core 4 on socket 1 00:04:18.204 EAL: Detected lcore 33 as core 5 on socket 1 00:04:18.204 EAL: Detected lcore 34 as core 6 on socket 1 00:04:18.204 EAL: Detected lcore 35 as core 8 on socket 1 00:04:18.204 EAL: Detected lcore 36 as core 9 on socket 1 00:04:18.204 EAL: Detected lcore 37 as core 10 on socket 1 00:04:18.204 EAL: Detected lcore 38 as core 11 on socket 1 00:04:18.204 EAL: Detected lcore 39 as core 12 on socket 1 00:04:18.204 EAL: Detected lcore 40 as core 13 on socket 1 00:04:18.204 EAL: Detected lcore 41 as core 14 on socket 1 00:04:18.204 EAL: Detected lcore 42 as core 16 on socket 1 00:04:18.204 EAL: Detected lcore 43 as core 17 on socket 1 00:04:18.204 EAL: Detected lcore 44 as core 18 on socket 1 00:04:18.204 EAL: Detected lcore 45 as core 19 on socket 1 00:04:18.204 EAL: Detected lcore 46 as core 20 on 
socket 1 00:04:18.204 EAL: Detected lcore 47 as core 21 on socket 1 00:04:18.204 EAL: Detected lcore 48 as core 22 on socket 1 00:04:18.204 EAL: Detected lcore 49 as core 24 on socket 1 00:04:18.204 EAL: Detected lcore 50 as core 25 on socket 1 00:04:18.204 EAL: Detected lcore 51 as core 26 on socket 1 00:04:18.204 EAL: Detected lcore 52 as core 27 on socket 1 00:04:18.204 EAL: Detected lcore 53 as core 28 on socket 1 00:04:18.204 EAL: Detected lcore 54 as core 29 on socket 1 00:04:18.204 EAL: Detected lcore 55 as core 30 on socket 1 00:04:18.204 EAL: Detected lcore 56 as core 0 on socket 0 00:04:18.204 EAL: Detected lcore 57 as core 1 on socket 0 00:04:18.204 EAL: Detected lcore 58 as core 2 on socket 0 00:04:18.204 EAL: Detected lcore 59 as core 3 on socket 0 00:04:18.204 EAL: Detected lcore 60 as core 4 on socket 0 00:04:18.204 EAL: Detected lcore 61 as core 5 on socket 0 00:04:18.204 EAL: Detected lcore 62 as core 6 on socket 0 00:04:18.204 EAL: Detected lcore 63 as core 8 on socket 0 00:04:18.204 EAL: Detected lcore 64 as core 9 on socket 0 00:04:18.204 EAL: Detected lcore 65 as core 10 on socket 0 00:04:18.204 EAL: Detected lcore 66 as core 11 on socket 0 00:04:18.204 EAL: Detected lcore 67 as core 12 on socket 0 00:04:18.204 EAL: Detected lcore 68 as core 13 on socket 0 00:04:18.204 EAL: Detected lcore 69 as core 14 on socket 0 00:04:18.204 EAL: Detected lcore 70 as core 16 on socket 0 00:04:18.204 EAL: Detected lcore 71 as core 17 on socket 0 00:04:18.204 EAL: Detected lcore 72 as core 18 on socket 0 00:04:18.204 EAL: Detected lcore 73 as core 19 on socket 0 00:04:18.204 EAL: Detected lcore 74 as core 20 on socket 0 00:04:18.204 EAL: Detected lcore 75 as core 21 on socket 0 00:04:18.204 EAL: Detected lcore 76 as core 22 on socket 0 00:04:18.204 EAL: Detected lcore 77 as core 24 on socket 0 00:04:18.204 EAL: Detected lcore 78 as core 25 on socket 0 00:04:18.204 EAL: Detected lcore 79 as core 26 on socket 0 00:04:18.204 EAL: Detected lcore 80 as core 27 on socket 0 00:04:18.204 EAL: Detected lcore 81 as core 28 on socket 0 00:04:18.204 EAL: Detected lcore 82 as core 29 on socket 0 00:04:18.204 EAL: Detected lcore 83 as core 30 on socket 0 00:04:18.204 EAL: Detected lcore 84 as core 0 on socket 1 00:04:18.204 EAL: Detected lcore 85 as core 1 on socket 1 00:04:18.204 EAL: Detected lcore 86 as core 2 on socket 1 00:04:18.204 EAL: Detected lcore 87 as core 3 on socket 1 00:04:18.204 EAL: Detected lcore 88 as core 4 on socket 1 00:04:18.204 EAL: Detected lcore 89 as core 5 on socket 1 00:04:18.204 EAL: Detected lcore 90 as core 6 on socket 1 00:04:18.204 EAL: Detected lcore 91 as core 8 on socket 1 00:04:18.204 EAL: Detected lcore 92 as core 9 on socket 1 00:04:18.204 EAL: Detected lcore 93 as core 10 on socket 1 00:04:18.204 EAL: Detected lcore 94 as core 11 on socket 1 00:04:18.204 EAL: Detected lcore 95 as core 12 on socket 1 00:04:18.204 EAL: Detected lcore 96 as core 13 on socket 1 00:04:18.204 EAL: Detected lcore 97 as core 14 on socket 1 00:04:18.204 EAL: Detected lcore 98 as core 16 on socket 1 00:04:18.204 EAL: Detected lcore 99 as core 17 on socket 1 00:04:18.204 EAL: Detected lcore 100 as core 18 on socket 1 00:04:18.204 EAL: Detected lcore 101 as core 19 on socket 1 00:04:18.205 EAL: Detected lcore 102 as core 20 on socket 1 00:04:18.205 EAL: Detected lcore 103 as core 21 on socket 1 00:04:18.205 EAL: Detected lcore 104 as core 22 on socket 1 00:04:18.205 EAL: Detected lcore 105 as core 24 on socket 1 00:04:18.205 EAL: Detected lcore 106 as core 25 on socket 1 00:04:18.205 
EAL: Detected lcore 107 as core 26 on socket 1 00:04:18.205 EAL: Detected lcore 108 as core 27 on socket 1 00:04:18.205 EAL: Detected lcore 109 as core 28 on socket 1 00:04:18.205 EAL: Detected lcore 110 as core 29 on socket 1 00:04:18.205 EAL: Detected lcore 111 as core 30 on socket 1 00:04:18.205 EAL: Maximum logical cores by configuration: 128 00:04:18.205 EAL: Detected CPU lcores: 112 00:04:18.205 EAL: Detected NUMA nodes: 2 00:04:18.205 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:18.205 EAL: Checking presence of .so 'librte_eal.so.24' 00:04:18.205 EAL: Checking presence of .so 'librte_eal.so' 00:04:18.205 EAL: Detected static linkage of DPDK 00:04:18.205 EAL: No shared files mode enabled, IPC will be disabled 00:04:18.205 EAL: Bus pci wants IOVA as 'DC' 00:04:18.205 EAL: Buses did not request a specific IOVA mode. 00:04:18.205 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:18.205 EAL: Selected IOVA mode 'VA' 00:04:18.205 EAL: No free 2048 kB hugepages reported on node 1 00:04:18.205 EAL: Probing VFIO support... 00:04:18.205 EAL: IOMMU type 1 (Type 1) is supported 00:04:18.205 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:18.205 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:18.205 EAL: VFIO support initialized 00:04:18.205 EAL: Ask a virtual area of 0x2e000 bytes 00:04:18.205 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:18.205 EAL: Setting up physically contiguous memory... 00:04:18.205 EAL: Setting maximum number of open files to 524288 00:04:18.205 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:18.205 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:18.205 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:18.205 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.205 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:18.205 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:18.205 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.205 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:18.205 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:18.205 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.205 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:18.205 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:18.205 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.205 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:18.205 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:18.205 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.205 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:18.205 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:18.205 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.205 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:18.205 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:18.205 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.205 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:18.205 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:18.205 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.205 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:18.205 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:18.205 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:18.205 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.205 EAL: 
Virtual area found at 0x201000800000 (size = 0x61000) 00:04:18.205 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:18.205 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.205 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:18.205 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:18.205 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.205 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:18.205 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:18.205 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.205 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:18.205 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:18.205 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.205 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:18.205 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:18.205 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.205 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:18.205 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:18.205 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.205 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:18.205 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:18.205 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.205 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:18.205 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:18.205 EAL: Hugepages will be freed exactly as allocated. 00:04:18.205 EAL: No shared files mode enabled, IPC is disabled 00:04:18.205 EAL: No shared files mode enabled, IPC is disabled 00:04:18.205 EAL: TSC frequency is ~2500000 KHz 00:04:18.205 EAL: Main lcore 0 is ready (tid=7fc411ae6a00;cpuset=[0]) 00:04:18.205 EAL: Trying to obtain current memory policy. 00:04:18.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.205 EAL: Restoring previous memory policy: 0 00:04:18.205 EAL: request: mp_malloc_sync 00:04:18.205 EAL: No shared files mode enabled, IPC is disabled 00:04:18.205 EAL: Heap on socket 0 was expanded by 2MB 00:04:18.205 EAL: No shared files mode enabled, IPC is disabled 00:04:18.205 EAL: Mem event callback 'spdk:(nil)' registered 00:04:18.205 00:04:18.205 00:04:18.205 CUnit - A unit testing framework for C - Version 2.1-3 00:04:18.205 http://cunit.sourceforge.net/ 00:04:18.205 00:04:18.205 00:04:18.205 Suite: components_suite 00:04:18.205 Test: vtophys_malloc_test ...passed 00:04:18.205 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:18.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.205 EAL: Restoring previous memory policy: 4 00:04:18.205 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.205 EAL: request: mp_malloc_sync 00:04:18.205 EAL: No shared files mode enabled, IPC is disabled 00:04:18.205 EAL: Heap on socket 0 was expanded by 4MB 00:04:18.205 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.205 EAL: request: mp_malloc_sync 00:04:18.205 EAL: No shared files mode enabled, IPC is disabled 00:04:18.205 EAL: Heap on socket 0 was shrunk by 4MB 00:04:18.205 EAL: Trying to obtain current memory policy. 
00:04:18.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.205 EAL: Restoring previous memory policy: 4 00:04:18.205 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.205 EAL: request: mp_malloc_sync 00:04:18.205 EAL: No shared files mode enabled, IPC is disabled 00:04:18.205 EAL: Heap on socket 0 was expanded by 6MB 00:04:18.205 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.205 EAL: request: mp_malloc_sync 00:04:18.205 EAL: No shared files mode enabled, IPC is disabled 00:04:18.205 EAL: Heap on socket 0 was shrunk by 6MB 00:04:18.205 EAL: Trying to obtain current memory policy. 00:04:18.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.205 EAL: Restoring previous memory policy: 4 00:04:18.205 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.205 EAL: request: mp_malloc_sync 00:04:18.205 EAL: No shared files mode enabled, IPC is disabled 00:04:18.205 EAL: Heap on socket 0 was expanded by 10MB 00:04:18.205 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.205 EAL: request: mp_malloc_sync 00:04:18.205 EAL: No shared files mode enabled, IPC is disabled 00:04:18.205 EAL: Heap on socket 0 was shrunk by 10MB 00:04:18.205 EAL: Trying to obtain current memory policy. 00:04:18.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.205 EAL: Restoring previous memory policy: 4 00:04:18.205 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.205 EAL: request: mp_malloc_sync 00:04:18.205 EAL: No shared files mode enabled, IPC is disabled 00:04:18.205 EAL: Heap on socket 0 was expanded by 18MB 00:04:18.205 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.205 EAL: request: mp_malloc_sync 00:04:18.205 EAL: No shared files mode enabled, IPC is disabled 00:04:18.205 EAL: Heap on socket 0 was shrunk by 18MB 00:04:18.205 EAL: Trying to obtain current memory policy. 00:04:18.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.205 EAL: Restoring previous memory policy: 4 00:04:18.205 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.205 EAL: request: mp_malloc_sync 00:04:18.205 EAL: No shared files mode enabled, IPC is disabled 00:04:18.205 EAL: Heap on socket 0 was expanded by 34MB 00:04:18.205 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.464 EAL: request: mp_malloc_sync 00:04:18.464 EAL: No shared files mode enabled, IPC is disabled 00:04:18.464 EAL: Heap on socket 0 was shrunk by 34MB 00:04:18.464 EAL: Trying to obtain current memory policy. 00:04:18.464 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.464 EAL: Restoring previous memory policy: 4 00:04:18.464 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.464 EAL: request: mp_malloc_sync 00:04:18.464 EAL: No shared files mode enabled, IPC is disabled 00:04:18.464 EAL: Heap on socket 0 was expanded by 66MB 00:04:18.464 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.464 EAL: request: mp_malloc_sync 00:04:18.464 EAL: No shared files mode enabled, IPC is disabled 00:04:18.464 EAL: Heap on socket 0 was shrunk by 66MB 00:04:18.464 EAL: Trying to obtain current memory policy. 
00:04:18.464 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.464 EAL: Restoring previous memory policy: 4 00:04:18.464 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.464 EAL: request: mp_malloc_sync 00:04:18.464 EAL: No shared files mode enabled, IPC is disabled 00:04:18.464 EAL: Heap on socket 0 was expanded by 130MB 00:04:18.464 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.464 EAL: request: mp_malloc_sync 00:04:18.464 EAL: No shared files mode enabled, IPC is disabled 00:04:18.464 EAL: Heap on socket 0 was shrunk by 130MB 00:04:18.464 EAL: Trying to obtain current memory policy. 00:04:18.464 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.464 EAL: Restoring previous memory policy: 4 00:04:18.464 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.464 EAL: request: mp_malloc_sync 00:04:18.464 EAL: No shared files mode enabled, IPC is disabled 00:04:18.464 EAL: Heap on socket 0 was expanded by 258MB 00:04:18.464 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.464 EAL: request: mp_malloc_sync 00:04:18.464 EAL: No shared files mode enabled, IPC is disabled 00:04:18.464 EAL: Heap on socket 0 was shrunk by 258MB 00:04:18.464 EAL: Trying to obtain current memory policy. 00:04:18.464 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.723 EAL: Restoring previous memory policy: 4 00:04:18.723 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.723 EAL: request: mp_malloc_sync 00:04:18.723 EAL: No shared files mode enabled, IPC is disabled 00:04:18.723 EAL: Heap on socket 0 was expanded by 514MB 00:04:18.723 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.723 EAL: request: mp_malloc_sync 00:04:18.723 EAL: No shared files mode enabled, IPC is disabled 00:04:18.723 EAL: Heap on socket 0 was shrunk by 514MB 00:04:18.723 EAL: Trying to obtain current memory policy. 
00:04:18.723 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.982 EAL: Restoring previous memory policy: 4 00:04:18.982 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.982 EAL: request: mp_malloc_sync 00:04:18.982 EAL: No shared files mode enabled, IPC is disabled 00:04:18.982 EAL: Heap on socket 0 was expanded by 1026MB 00:04:19.241 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.241 EAL: request: mp_malloc_sync 00:04:19.241 EAL: No shared files mode enabled, IPC is disabled 00:04:19.241 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:19.241 passed 00:04:19.241 00:04:19.241 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.241 suites 1 1 n/a 0 0 00:04:19.241 tests 2 2 2 0 0 00:04:19.241 asserts 497 497 497 0 n/a 00:04:19.241 00:04:19.241 Elapsed time = 0.960 seconds 00:04:19.241 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.241 EAL: request: mp_malloc_sync 00:04:19.241 EAL: No shared files mode enabled, IPC is disabled 00:04:19.241 EAL: Heap on socket 0 was shrunk by 2MB 00:04:19.241 EAL: No shared files mode enabled, IPC is disabled 00:04:19.241 EAL: No shared files mode enabled, IPC is disabled 00:04:19.241 EAL: No shared files mode enabled, IPC is disabled 00:04:19.241 00:04:19.241 real 0m1.078s 00:04:19.241 user 0m0.623s 00:04:19.241 sys 0m0.429s 00:04:19.241 11:40:46 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:19.241 11:40:46 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:19.241 ************************************ 00:04:19.241 END TEST env_vtophys 00:04:19.241 ************************************ 00:04:19.241 11:40:46 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:19.241 11:40:46 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:19.241 11:40:46 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:19.241 11:40:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.499 ************************************ 00:04:19.499 START TEST env_pci 00:04:19.499 ************************************ 00:04:19.499 11:40:46 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:19.499 00:04:19.499 00:04:19.499 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.499 http://cunit.sourceforge.net/ 00:04:19.499 00:04:19.499 00:04:19.499 Suite: pci 00:04:19.500 Test: pci_hook ...[2024-05-14 11:40:46.376976] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3610472 has claimed it 00:04:19.500 EAL: Cannot find device (10000:00:01.0) 00:04:19.500 EAL: Failed to attach device on primary process 00:04:19.500 passed 00:04:19.500 00:04:19.500 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.500 suites 1 1 n/a 0 0 00:04:19.500 tests 1 1 1 0 0 00:04:19.500 asserts 25 25 25 0 n/a 00:04:19.500 00:04:19.500 Elapsed time = 0.035 seconds 00:04:19.500 00:04:19.500 real 0m0.054s 00:04:19.500 user 0m0.014s 00:04:19.500 sys 0m0.040s 00:04:19.500 11:40:46 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:19.500 11:40:46 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:19.500 ************************************ 00:04:19.500 END TEST env_pci 00:04:19.500 ************************************ 00:04:19.500 11:40:46 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:19.500 
11:40:46 env -- env/env.sh@15 -- # uname 00:04:19.500 11:40:46 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:19.500 11:40:46 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:19.500 11:40:46 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:19.500 11:40:46 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:04:19.500 11:40:46 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:19.500 11:40:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.500 ************************************ 00:04:19.500 START TEST env_dpdk_post_init 00:04:19.500 ************************************ 00:04:19.500 11:40:46 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:19.500 EAL: Detected CPU lcores: 112 00:04:19.500 EAL: Detected NUMA nodes: 2 00:04:19.500 EAL: Detected static linkage of DPDK 00:04:19.500 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:19.500 EAL: Selected IOVA mode 'VA' 00:04:19.500 EAL: No free 2048 kB hugepages reported on node 1 00:04:19.500 EAL: VFIO support initialized 00:04:19.500 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:19.758 EAL: Using IOMMU type 1 (Type 1) 00:04:20.325 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:24.514 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:24.514 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001000000 00:04:24.514 Starting DPDK initialization... 00:04:24.514 Starting SPDK post initialization... 00:04:24.514 SPDK NVMe probe 00:04:24.514 Attaching to 0000:d8:00.0 00:04:24.514 Attached to 0000:d8:00.0 00:04:24.514 Cleaning up... 
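Note (editorial sketch, not part of the captured output): the env_dpdk_post_init pass above probes the NVMe controller at 0000:d8:00.0, attaches, and cleans up again. A hedged standalone rerun with the same arguments the harness used — the binary path, core mask, and base virtual address are all taken from the entries above; hugepage and VFIO setup are assumed to match this run:
# Sketch: standalone rerun of the DPDK post-init test with the harness's arguments.
BIN=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init
sudo $BIN -c 0x1 --base-virtaddr=0x200000000000
# Expected output mirrors the log: "Attaching to 0000:d8:00.0", "Attached to 0000:d8:00.0", "Cleaning up..."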
00:04:24.514 00:04:24.514 real 0m4.695s 00:04:24.514 user 0m3.512s 00:04:24.514 sys 0m0.425s 00:04:24.514 11:40:51 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:24.514 11:40:51 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:24.514 ************************************ 00:04:24.514 END TEST env_dpdk_post_init 00:04:24.514 ************************************ 00:04:24.514 11:40:51 env -- env/env.sh@26 -- # uname 00:04:24.514 11:40:51 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:24.514 11:40:51 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:24.515 11:40:51 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:24.515 11:40:51 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:24.515 11:40:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.515 ************************************ 00:04:24.515 START TEST env_mem_callbacks 00:04:24.515 ************************************ 00:04:24.515 11:40:51 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:24.515 EAL: Detected CPU lcores: 112 00:04:24.515 EAL: Detected NUMA nodes: 2 00:04:24.515 EAL: Detected static linkage of DPDK 00:04:24.515 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:24.515 EAL: Selected IOVA mode 'VA' 00:04:24.515 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.515 EAL: VFIO support initialized 00:04:24.515 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:24.515 00:04:24.515 00:04:24.515 CUnit - A unit testing framework for C - Version 2.1-3 00:04:24.515 http://cunit.sourceforge.net/ 00:04:24.515 00:04:24.515 00:04:24.515 Suite: memory 00:04:24.515 Test: test ... 
00:04:24.515 register 0x200000200000 2097152 00:04:24.515 malloc 3145728 00:04:24.515 register 0x200000400000 4194304 00:04:24.515 buf 0x200000500000 len 3145728 PASSED 00:04:24.515 malloc 64 00:04:24.515 buf 0x2000004fff40 len 64 PASSED 00:04:24.515 malloc 4194304 00:04:24.515 register 0x200000800000 6291456 00:04:24.515 buf 0x200000a00000 len 4194304 PASSED 00:04:24.515 free 0x200000500000 3145728 00:04:24.515 free 0x2000004fff40 64 00:04:24.515 unregister 0x200000400000 4194304 PASSED 00:04:24.515 free 0x200000a00000 4194304 00:04:24.515 unregister 0x200000800000 6291456 PASSED 00:04:24.515 malloc 8388608 00:04:24.515 register 0x200000400000 10485760 00:04:24.515 buf 0x200000600000 len 8388608 PASSED 00:04:24.515 free 0x200000600000 8388608 00:04:24.515 unregister 0x200000400000 10485760 PASSED 00:04:24.515 passed 00:04:24.515 00:04:24.515 Run Summary: Type Total Ran Passed Failed Inactive 00:04:24.515 suites 1 1 n/a 0 0 00:04:24.515 tests 1 1 1 0 0 00:04:24.515 asserts 15 15 15 0 n/a 00:04:24.515 00:04:24.515 Elapsed time = 0.005 seconds 00:04:24.515 00:04:24.515 real 0m0.065s 00:04:24.515 user 0m0.025s 00:04:24.515 sys 0m0.040s 00:04:24.515 11:40:51 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:24.515 11:40:51 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:24.515 ************************************ 00:04:24.515 END TEST env_mem_callbacks 00:04:24.515 ************************************ 00:04:24.515 00:04:24.515 real 0m6.534s 00:04:24.515 user 0m4.456s 00:04:24.515 sys 0m1.310s 00:04:24.515 11:40:51 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:24.515 11:40:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.515 ************************************ 00:04:24.515 END TEST env 00:04:24.515 ************************************ 00:04:24.515 11:40:51 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:24.515 11:40:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:24.515 11:40:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:24.515 11:40:51 -- common/autotest_common.sh@10 -- # set +x 00:04:24.515 ************************************ 00:04:24.515 START TEST rpc 00:04:24.515 ************************************ 00:04:24.515 11:40:51 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:24.515 * Looking for test storage... 00:04:24.515 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:24.773 11:40:51 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3611503 00:04:24.773 11:40:51 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.773 11:40:51 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:24.773 11:40:51 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3611503 00:04:24.773 11:40:51 rpc -- common/autotest_common.sh@827 -- # '[' -z 3611503 ']' 00:04:24.773 11:40:51 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.774 11:40:51 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:24.774 11:40:51 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
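Note (editorial sketch, not part of the captured output): waitforlisten above blocks until spdk_tgt answers on its RPC socket. A minimal readiness probe along the same lines, assuming the socket path printed above and that the rpc_get_methods call is available in this build:
# Sketch: poll the RPC socket until spdk_tgt responds (socket path taken from the log line above).
RPC=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
until $RPC -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
echo "spdk_tgt is listening on /var/tmp/spdk.sock"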
00:04:24.774 11:40:51 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:24.774 11:40:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.774 [2024-05-14 11:40:51.632146] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:04:24.774 [2024-05-14 11:40:51.632242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3611503 ] 00:04:24.774 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.774 [2024-05-14 11:40:51.701538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.774 [2024-05-14 11:40:51.780123] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:24.774 [2024-05-14 11:40:51.780160] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3611503' to capture a snapshot of events at runtime. 00:04:24.774 [2024-05-14 11:40:51.780169] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:24.774 [2024-05-14 11:40:51.780178] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:24.774 [2024-05-14 11:40:51.780186] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3611503 for offline analysis/debug. 00:04:24.774 [2024-05-14 11:40:51.780206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.720 11:40:52 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:25.720 11:40:52 rpc -- common/autotest_common.sh@860 -- # return 0 00:04:25.720 11:40:52 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:25.720 11:40:52 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:25.720 11:40:52 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:25.720 11:40:52 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:25.720 11:40:52 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:25.720 11:40:52 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:25.720 11:40:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.720 ************************************ 00:04:25.720 START TEST rpc_integrity 00:04:25.720 ************************************ 00:04:25.720 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:04:25.720 11:40:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:25.720 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.720 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.720 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.720 11:40:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:25.720 11:40:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:25.720 11:40:52 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:25.720 11:40:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:25.720 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.720 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.721 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.721 11:40:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:25.721 11:40:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:25.721 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.721 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.721 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.721 11:40:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:25.721 { 00:04:25.721 "name": "Malloc0", 00:04:25.721 "aliases": [ 00:04:25.721 "4064f6d1-b5b7-4086-a21d-9d1fc6246898" 00:04:25.721 ], 00:04:25.721 "product_name": "Malloc disk", 00:04:25.721 "block_size": 512, 00:04:25.721 "num_blocks": 16384, 00:04:25.721 "uuid": "4064f6d1-b5b7-4086-a21d-9d1fc6246898", 00:04:25.721 "assigned_rate_limits": { 00:04:25.721 "rw_ios_per_sec": 0, 00:04:25.721 "rw_mbytes_per_sec": 0, 00:04:25.721 "r_mbytes_per_sec": 0, 00:04:25.721 "w_mbytes_per_sec": 0 00:04:25.721 }, 00:04:25.721 "claimed": false, 00:04:25.721 "zoned": false, 00:04:25.721 "supported_io_types": { 00:04:25.721 "read": true, 00:04:25.721 "write": true, 00:04:25.721 "unmap": true, 00:04:25.721 "write_zeroes": true, 00:04:25.721 "flush": true, 00:04:25.721 "reset": true, 00:04:25.721 "compare": false, 00:04:25.721 "compare_and_write": false, 00:04:25.721 "abort": true, 00:04:25.721 "nvme_admin": false, 00:04:25.721 "nvme_io": false 00:04:25.721 }, 00:04:25.721 "memory_domains": [ 00:04:25.721 { 00:04:25.721 "dma_device_id": "system", 00:04:25.721 "dma_device_type": 1 00:04:25.721 }, 00:04:25.721 { 00:04:25.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.721 "dma_device_type": 2 00:04:25.721 } 00:04:25.721 ], 00:04:25.721 "driver_specific": {} 00:04:25.721 } 00:04:25.721 ]' 00:04:25.721 11:40:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:25.721 11:40:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:25.721 11:40:52 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:25.721 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.721 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.721 [2024-05-14 11:40:52.606914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:25.721 [2024-05-14 11:40:52.606947] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:25.721 [2024-05-14 11:40:52.606964] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x48e1ea0 00:04:25.721 [2024-05-14 11:40:52.606973] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:25.721 [2024-05-14 11:40:52.607764] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:25.721 [2024-05-14 11:40:52.607787] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:25.721 Passthru0 00:04:25.721 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.721 11:40:52 rpc.rpc_integrity -- rpc/rpc.sh@20 
-- # rpc_cmd bdev_get_bdevs 00:04:25.721 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.721 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.721 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.721 11:40:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:25.721 { 00:04:25.721 "name": "Malloc0", 00:04:25.721 "aliases": [ 00:04:25.721 "4064f6d1-b5b7-4086-a21d-9d1fc6246898" 00:04:25.721 ], 00:04:25.721 "product_name": "Malloc disk", 00:04:25.721 "block_size": 512, 00:04:25.721 "num_blocks": 16384, 00:04:25.721 "uuid": "4064f6d1-b5b7-4086-a21d-9d1fc6246898", 00:04:25.721 "assigned_rate_limits": { 00:04:25.721 "rw_ios_per_sec": 0, 00:04:25.721 "rw_mbytes_per_sec": 0, 00:04:25.721 "r_mbytes_per_sec": 0, 00:04:25.721 "w_mbytes_per_sec": 0 00:04:25.721 }, 00:04:25.721 "claimed": true, 00:04:25.721 "claim_type": "exclusive_write", 00:04:25.721 "zoned": false, 00:04:25.721 "supported_io_types": { 00:04:25.721 "read": true, 00:04:25.721 "write": true, 00:04:25.721 "unmap": true, 00:04:25.721 "write_zeroes": true, 00:04:25.721 "flush": true, 00:04:25.721 "reset": true, 00:04:25.721 "compare": false, 00:04:25.721 "compare_and_write": false, 00:04:25.721 "abort": true, 00:04:25.721 "nvme_admin": false, 00:04:25.721 "nvme_io": false 00:04:25.721 }, 00:04:25.721 "memory_domains": [ 00:04:25.721 { 00:04:25.721 "dma_device_id": "system", 00:04:25.721 "dma_device_type": 1 00:04:25.721 }, 00:04:25.721 { 00:04:25.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.721 "dma_device_type": 2 00:04:25.721 } 00:04:25.721 ], 00:04:25.721 "driver_specific": {} 00:04:25.721 }, 00:04:25.721 { 00:04:25.721 "name": "Passthru0", 00:04:25.721 "aliases": [ 00:04:25.721 "9f06dcba-6481-513f-8c1e-3f1fbb1c9637" 00:04:25.721 ], 00:04:25.721 "product_name": "passthru", 00:04:25.721 "block_size": 512, 00:04:25.721 "num_blocks": 16384, 00:04:25.721 "uuid": "9f06dcba-6481-513f-8c1e-3f1fbb1c9637", 00:04:25.721 "assigned_rate_limits": { 00:04:25.721 "rw_ios_per_sec": 0, 00:04:25.721 "rw_mbytes_per_sec": 0, 00:04:25.721 "r_mbytes_per_sec": 0, 00:04:25.721 "w_mbytes_per_sec": 0 00:04:25.721 }, 00:04:25.721 "claimed": false, 00:04:25.721 "zoned": false, 00:04:25.721 "supported_io_types": { 00:04:25.721 "read": true, 00:04:25.721 "write": true, 00:04:25.721 "unmap": true, 00:04:25.721 "write_zeroes": true, 00:04:25.721 "flush": true, 00:04:25.721 "reset": true, 00:04:25.721 "compare": false, 00:04:25.721 "compare_and_write": false, 00:04:25.721 "abort": true, 00:04:25.721 "nvme_admin": false, 00:04:25.721 "nvme_io": false 00:04:25.721 }, 00:04:25.721 "memory_domains": [ 00:04:25.721 { 00:04:25.721 "dma_device_id": "system", 00:04:25.721 "dma_device_type": 1 00:04:25.721 }, 00:04:25.721 { 00:04:25.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.721 "dma_device_type": 2 00:04:25.721 } 00:04:25.721 ], 00:04:25.721 "driver_specific": { 00:04:25.721 "passthru": { 00:04:25.721 "name": "Passthru0", 00:04:25.721 "base_bdev_name": "Malloc0" 00:04:25.721 } 00:04:25.721 } 00:04:25.721 } 00:04:25.721 ]' 00:04:25.721 11:40:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:25.721 11:40:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:25.721 11:40:52 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:25.721 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.721 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
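Note (editorial sketch, not part of the captured output): rpc_integrity walks a malloc bdev through a passthru claim and back, checking bdev_get_bdevs at each step; the Passthru0 and Malloc0 deletions follow just below. A hedged condensation of that sequence as direct rpc.py calls, using only commands visible in the surrounding entries:
# Sketch of the rpc_integrity sequence using the same RPCs the test issues above and below.
RPC=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_malloc_create 8 512                      # -> Malloc0
$RPC bdev_passthru_create -b Malloc0 -p Passthru0  # claim Malloc0
$RPC bdev_get_bdevs | jq length                    # expect 2 (Malloc0 + Passthru0)
$RPC bdev_passthru_delete Passthru0
$RPC bdev_malloc_delete Malloc0
$RPC bdev_get_bdevs | jq length                    # expect 0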
00:04:25.721 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.721 11:40:52 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:25.721 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.721 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.721 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.721 11:40:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:25.721 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.721 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.721 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.721 11:40:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:25.721 11:40:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:25.721 11:40:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:25.721 00:04:25.721 real 0m0.272s 00:04:25.721 user 0m0.167s 00:04:25.721 sys 0m0.045s 00:04:25.721 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:25.721 11:40:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.721 ************************************ 00:04:25.721 END TEST rpc_integrity 00:04:25.721 ************************************ 00:04:25.721 11:40:52 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:25.721 11:40:52 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:25.721 11:40:52 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:25.721 11:40:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.979 ************************************ 00:04:25.979 START TEST rpc_plugins 00:04:25.979 ************************************ 00:04:25.979 11:40:52 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:04:25.979 11:40:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:25.979 11:40:52 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.979 11:40:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:25.980 11:40:52 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.980 11:40:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:25.980 11:40:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:25.980 11:40:52 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.980 11:40:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:25.980 11:40:52 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.980 11:40:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:25.980 { 00:04:25.980 "name": "Malloc1", 00:04:25.980 "aliases": [ 00:04:25.980 "af808694-b6e0-4a76-a6e6-8cff8b663027" 00:04:25.980 ], 00:04:25.980 "product_name": "Malloc disk", 00:04:25.980 "block_size": 4096, 00:04:25.980 "num_blocks": 256, 00:04:25.980 "uuid": "af808694-b6e0-4a76-a6e6-8cff8b663027", 00:04:25.980 "assigned_rate_limits": { 00:04:25.980 "rw_ios_per_sec": 0, 00:04:25.980 "rw_mbytes_per_sec": 0, 00:04:25.980 "r_mbytes_per_sec": 0, 00:04:25.980 "w_mbytes_per_sec": 0 00:04:25.980 }, 00:04:25.980 "claimed": false, 00:04:25.980 "zoned": false, 00:04:25.980 "supported_io_types": { 00:04:25.980 "read": true, 00:04:25.980 "write": true, 00:04:25.980 "unmap": true, 00:04:25.980 "write_zeroes": true, 
00:04:25.980 "flush": true, 00:04:25.980 "reset": true, 00:04:25.980 "compare": false, 00:04:25.980 "compare_and_write": false, 00:04:25.980 "abort": true, 00:04:25.980 "nvme_admin": false, 00:04:25.980 "nvme_io": false 00:04:25.980 }, 00:04:25.980 "memory_domains": [ 00:04:25.980 { 00:04:25.980 "dma_device_id": "system", 00:04:25.980 "dma_device_type": 1 00:04:25.980 }, 00:04:25.980 { 00:04:25.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.980 "dma_device_type": 2 00:04:25.980 } 00:04:25.980 ], 00:04:25.980 "driver_specific": {} 00:04:25.980 } 00:04:25.980 ]' 00:04:25.980 11:40:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:25.980 11:40:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:25.980 11:40:52 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:25.980 11:40:52 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.980 11:40:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:25.980 11:40:52 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.980 11:40:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:25.980 11:40:52 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.980 11:40:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:25.980 11:40:52 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.980 11:40:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:25.980 11:40:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:25.980 11:40:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:25.980 00:04:25.980 real 0m0.142s 00:04:25.980 user 0m0.084s 00:04:25.980 sys 0m0.026s 00:04:25.980 11:40:52 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:25.980 11:40:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:25.980 ************************************ 00:04:25.980 END TEST rpc_plugins 00:04:25.980 ************************************ 00:04:25.980 11:40:53 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:25.980 11:40:53 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:25.980 11:40:53 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:25.980 11:40:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.980 ************************************ 00:04:25.980 START TEST rpc_trace_cmd_test 00:04:25.980 ************************************ 00:04:25.980 11:40:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:04:25.980 11:40:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:25.980 11:40:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:25.980 11:40:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.980 11:40:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:26.239 11:40:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.239 11:40:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:26.239 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3611503", 00:04:26.239 "tpoint_group_mask": "0x8", 00:04:26.239 "iscsi_conn": { 00:04:26.239 "mask": "0x2", 00:04:26.239 "tpoint_mask": "0x0" 00:04:26.239 }, 00:04:26.239 "scsi": { 00:04:26.239 "mask": "0x4", 00:04:26.239 "tpoint_mask": "0x0" 00:04:26.239 }, 00:04:26.239 "bdev": { 00:04:26.239 "mask": "0x8", 00:04:26.239 "tpoint_mask": 
"0xffffffffffffffff" 00:04:26.239 }, 00:04:26.239 "nvmf_rdma": { 00:04:26.239 "mask": "0x10", 00:04:26.239 "tpoint_mask": "0x0" 00:04:26.239 }, 00:04:26.239 "nvmf_tcp": { 00:04:26.239 "mask": "0x20", 00:04:26.239 "tpoint_mask": "0x0" 00:04:26.239 }, 00:04:26.239 "ftl": { 00:04:26.239 "mask": "0x40", 00:04:26.239 "tpoint_mask": "0x0" 00:04:26.239 }, 00:04:26.239 "blobfs": { 00:04:26.239 "mask": "0x80", 00:04:26.239 "tpoint_mask": "0x0" 00:04:26.239 }, 00:04:26.239 "dsa": { 00:04:26.239 "mask": "0x200", 00:04:26.239 "tpoint_mask": "0x0" 00:04:26.239 }, 00:04:26.239 "thread": { 00:04:26.239 "mask": "0x400", 00:04:26.239 "tpoint_mask": "0x0" 00:04:26.239 }, 00:04:26.239 "nvme_pcie": { 00:04:26.239 "mask": "0x800", 00:04:26.239 "tpoint_mask": "0x0" 00:04:26.239 }, 00:04:26.239 "iaa": { 00:04:26.239 "mask": "0x1000", 00:04:26.239 "tpoint_mask": "0x0" 00:04:26.239 }, 00:04:26.239 "nvme_tcp": { 00:04:26.239 "mask": "0x2000", 00:04:26.239 "tpoint_mask": "0x0" 00:04:26.239 }, 00:04:26.239 "bdev_nvme": { 00:04:26.239 "mask": "0x4000", 00:04:26.239 "tpoint_mask": "0x0" 00:04:26.239 }, 00:04:26.239 "sock": { 00:04:26.239 "mask": "0x8000", 00:04:26.239 "tpoint_mask": "0x0" 00:04:26.239 } 00:04:26.239 }' 00:04:26.239 11:40:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:26.239 11:40:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:26.239 11:40:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:26.239 11:40:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:26.239 11:40:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:26.239 11:40:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:26.239 11:40:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:26.239 11:40:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:26.239 11:40:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:26.239 11:40:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:26.239 00:04:26.239 real 0m0.231s 00:04:26.239 user 0m0.201s 00:04:26.239 sys 0m0.021s 00:04:26.239 11:40:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:26.239 11:40:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:26.239 ************************************ 00:04:26.239 END TEST rpc_trace_cmd_test 00:04:26.239 ************************************ 00:04:26.498 11:40:53 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:26.498 11:40:53 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:26.498 11:40:53 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:26.498 11:40:53 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:26.498 11:40:53 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:26.498 11:40:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.498 ************************************ 00:04:26.498 START TEST rpc_daemon_integrity 00:04:26.498 ************************************ 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:26.498 { 00:04:26.498 "name": "Malloc2", 00:04:26.498 "aliases": [ 00:04:26.498 "02f2fd35-3f05-491f-b411-6a6a008817aa" 00:04:26.498 ], 00:04:26.498 "product_name": "Malloc disk", 00:04:26.498 "block_size": 512, 00:04:26.498 "num_blocks": 16384, 00:04:26.498 "uuid": "02f2fd35-3f05-491f-b411-6a6a008817aa", 00:04:26.498 "assigned_rate_limits": { 00:04:26.498 "rw_ios_per_sec": 0, 00:04:26.498 "rw_mbytes_per_sec": 0, 00:04:26.498 "r_mbytes_per_sec": 0, 00:04:26.498 "w_mbytes_per_sec": 0 00:04:26.498 }, 00:04:26.498 "claimed": false, 00:04:26.498 "zoned": false, 00:04:26.498 "supported_io_types": { 00:04:26.498 "read": true, 00:04:26.498 "write": true, 00:04:26.498 "unmap": true, 00:04:26.498 "write_zeroes": true, 00:04:26.498 "flush": true, 00:04:26.498 "reset": true, 00:04:26.498 "compare": false, 00:04:26.498 "compare_and_write": false, 00:04:26.498 "abort": true, 00:04:26.498 "nvme_admin": false, 00:04:26.498 "nvme_io": false 00:04:26.498 }, 00:04:26.498 "memory_domains": [ 00:04:26.498 { 00:04:26.498 "dma_device_id": "system", 00:04:26.498 "dma_device_type": 1 00:04:26.498 }, 00:04:26.498 { 00:04:26.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.498 "dma_device_type": 2 00:04:26.498 } 00:04:26.498 ], 00:04:26.498 "driver_specific": {} 00:04:26.498 } 00:04:26.498 ]' 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.498 [2024-05-14 11:40:53.517251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:26.498 [2024-05-14 11:40:53.517281] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:26.498 [2024-05-14 11:40:53.517296] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x48d4430 00:04:26.498 [2024-05-14 11:40:53.517305] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:26.498 [2024-05-14 11:40:53.518002] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:26.498 [2024-05-14 11:40:53.518023] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:26.498 Passthru0 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.498 11:40:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.499 11:40:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.499 11:40:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:26.499 { 00:04:26.499 "name": "Malloc2", 00:04:26.499 "aliases": [ 00:04:26.499 "02f2fd35-3f05-491f-b411-6a6a008817aa" 00:04:26.499 ], 00:04:26.499 "product_name": "Malloc disk", 00:04:26.499 "block_size": 512, 00:04:26.499 "num_blocks": 16384, 00:04:26.499 "uuid": "02f2fd35-3f05-491f-b411-6a6a008817aa", 00:04:26.499 "assigned_rate_limits": { 00:04:26.499 "rw_ios_per_sec": 0, 00:04:26.499 "rw_mbytes_per_sec": 0, 00:04:26.499 "r_mbytes_per_sec": 0, 00:04:26.499 "w_mbytes_per_sec": 0 00:04:26.499 }, 00:04:26.499 "claimed": true, 00:04:26.499 "claim_type": "exclusive_write", 00:04:26.499 "zoned": false, 00:04:26.499 "supported_io_types": { 00:04:26.499 "read": true, 00:04:26.499 "write": true, 00:04:26.499 "unmap": true, 00:04:26.499 "write_zeroes": true, 00:04:26.499 "flush": true, 00:04:26.499 "reset": true, 00:04:26.499 "compare": false, 00:04:26.499 "compare_and_write": false, 00:04:26.499 "abort": true, 00:04:26.499 "nvme_admin": false, 00:04:26.499 "nvme_io": false 00:04:26.499 }, 00:04:26.499 "memory_domains": [ 00:04:26.499 { 00:04:26.499 "dma_device_id": "system", 00:04:26.499 "dma_device_type": 1 00:04:26.499 }, 00:04:26.499 { 00:04:26.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.499 "dma_device_type": 2 00:04:26.499 } 00:04:26.499 ], 00:04:26.499 "driver_specific": {} 00:04:26.499 }, 00:04:26.499 { 00:04:26.499 "name": "Passthru0", 00:04:26.499 "aliases": [ 00:04:26.499 "34460f58-dcb4-5d39-aae1-9315bceb8d5e" 00:04:26.499 ], 00:04:26.499 "product_name": "passthru", 00:04:26.499 "block_size": 512, 00:04:26.499 "num_blocks": 16384, 00:04:26.499 "uuid": "34460f58-dcb4-5d39-aae1-9315bceb8d5e", 00:04:26.499 "assigned_rate_limits": { 00:04:26.499 "rw_ios_per_sec": 0, 00:04:26.499 "rw_mbytes_per_sec": 0, 00:04:26.499 "r_mbytes_per_sec": 0, 00:04:26.499 "w_mbytes_per_sec": 0 00:04:26.499 }, 00:04:26.499 "claimed": false, 00:04:26.499 "zoned": false, 00:04:26.499 "supported_io_types": { 00:04:26.499 "read": true, 00:04:26.499 "write": true, 00:04:26.499 "unmap": true, 00:04:26.499 "write_zeroes": true, 00:04:26.499 "flush": true, 00:04:26.499 "reset": true, 00:04:26.499 "compare": false, 00:04:26.499 "compare_and_write": false, 00:04:26.499 "abort": true, 00:04:26.499 "nvme_admin": false, 00:04:26.499 "nvme_io": false 00:04:26.499 }, 00:04:26.499 "memory_domains": [ 00:04:26.499 { 00:04:26.499 "dma_device_id": "system", 00:04:26.499 "dma_device_type": 1 00:04:26.499 }, 00:04:26.499 { 00:04:26.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.499 "dma_device_type": 2 00:04:26.499 } 00:04:26.499 ], 00:04:26.499 "driver_specific": { 00:04:26.499 "passthru": { 00:04:26.499 "name": "Passthru0", 00:04:26.499 "base_bdev_name": "Malloc2" 00:04:26.499 } 00:04:26.499 } 00:04:26.499 } 00:04:26.499 ]' 00:04:26.499 11:40:53 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:26.757 11:40:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:26.758 11:40:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:26.758 11:40:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.758 11:40:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.758 11:40:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.758 11:40:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:26.758 11:40:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.758 11:40:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.758 11:40:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.758 11:40:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:26.758 11:40:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.758 11:40:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.758 11:40:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.758 11:40:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:26.758 11:40:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:26.758 11:40:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:26.758 00:04:26.758 real 0m0.288s 00:04:26.758 user 0m0.177s 00:04:26.758 sys 0m0.051s 00:04:26.758 11:40:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:26.758 11:40:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.758 ************************************ 00:04:26.758 END TEST rpc_daemon_integrity 00:04:26.758 ************************************ 00:04:26.758 11:40:53 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:26.758 11:40:53 rpc -- rpc/rpc.sh@84 -- # killprocess 3611503 00:04:26.758 11:40:53 rpc -- common/autotest_common.sh@946 -- # '[' -z 3611503 ']' 00:04:26.758 11:40:53 rpc -- common/autotest_common.sh@950 -- # kill -0 3611503 00:04:26.758 11:40:53 rpc -- common/autotest_common.sh@951 -- # uname 00:04:26.758 11:40:53 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:26.758 11:40:53 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3611503 00:04:26.758 11:40:53 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:26.758 11:40:53 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:26.758 11:40:53 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3611503' 00:04:26.758 killing process with pid 3611503 00:04:26.758 11:40:53 rpc -- common/autotest_common.sh@965 -- # kill 3611503 00:04:26.758 11:40:53 rpc -- common/autotest_common.sh@970 -- # wait 3611503 00:04:27.016 00:04:27.016 real 0m2.574s 00:04:27.016 user 0m3.263s 00:04:27.016 sys 0m0.796s 00:04:27.016 11:40:54 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:27.016 11:40:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.016 ************************************ 00:04:27.016 END TEST rpc 00:04:27.016 ************************************ 00:04:27.275 11:40:54 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:27.275 11:40:54 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:27.275 11:40:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:27.275 11:40:54 -- common/autotest_common.sh@10 -- # set +x 00:04:27.275 ************************************ 00:04:27.275 START TEST skip_rpc 00:04:27.275 ************************************ 00:04:27.275 11:40:54 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:27.275 * Looking for test storage... 00:04:27.275 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:27.275 11:40:54 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:27.275 11:40:54 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:27.275 11:40:54 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:27.275 11:40:54 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:27.275 11:40:54 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:27.275 11:40:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.275 ************************************ 00:04:27.275 START TEST skip_rpc 00:04:27.275 ************************************ 00:04:27.275 11:40:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:04:27.275 11:40:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3612206 00:04:27.275 11:40:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.275 11:40:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:27.275 11:40:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:27.275 [2024-05-14 11:40:54.334792] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:04:27.275 [2024-05-14 11:40:54.334861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3612206 ] 00:04:27.533 EAL: No free 2048 kB hugepages reported on node 1 00:04:27.533 [2024-05-14 11:40:54.402518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.533 [2024-05-14 11:40:54.473478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3612206 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 3612206 ']' 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 3612206 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3612206 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3612206' 00:04:32.795 killing process with pid 3612206 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 3612206 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 3612206 00:04:32.795 00:04:32.795 real 0m5.365s 00:04:32.795 user 0m5.133s 00:04:32.795 sys 0m0.272s 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:32.795 11:40:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.795 ************************************ 00:04:32.795 END TEST skip_rpc 
00:04:32.795 ************************************ 00:04:32.795 11:40:59 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:32.795 11:40:59 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:32.795 11:40:59 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:32.795 11:40:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.795 ************************************ 00:04:32.795 START TEST skip_rpc_with_json 00:04:32.795 ************************************ 00:04:32.795 11:40:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:04:32.795 11:40:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:32.795 11:40:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3613043 00:04:32.795 11:40:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.795 11:40:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:32.795 11:40:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3613043 00:04:32.796 11:40:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 3613043 ']' 00:04:32.796 11:40:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.796 11:40:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:32.796 11:40:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.796 11:40:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:32.796 11:40:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.796 [2024-05-14 11:40:59.792306] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:04:32.796 [2024-05-14 11:40:59.792367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3613043 ] 00:04:32.796 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.796 [2024-05-14 11:40:59.863602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.054 [2024-05-14 11:40:59.943429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.621 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:33.621 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:04:33.621 11:41:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:33.621 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.621 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.621 [2024-05-14 11:41:00.619990] nvmf_rpc.c:2531:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:33.621 request: 00:04:33.621 { 00:04:33.621 "trtype": "tcp", 00:04:33.621 "method": "nvmf_get_transports", 00:04:33.621 "req_id": 1 00:04:33.621 } 00:04:33.621 Got JSON-RPC error response 00:04:33.621 response: 00:04:33.621 { 00:04:33.621 "code": -19, 00:04:33.621 "message": "No such device" 00:04:33.621 } 00:04:33.621 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:33.621 11:41:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:33.621 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.621 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.621 [2024-05-14 11:41:00.628071] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:33.621 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.621 11:41:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:33.621 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.621 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.881 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.881 11:41:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:33.881 { 00:04:33.881 "subsystems": [ 00:04:33.881 { 00:04:33.881 "subsystem": "scheduler", 00:04:33.881 "config": [ 00:04:33.881 { 00:04:33.881 "method": "framework_set_scheduler", 00:04:33.881 "params": { 00:04:33.881 "name": "static" 00:04:33.881 } 00:04:33.881 } 00:04:33.881 ] 00:04:33.881 }, 00:04:33.881 { 00:04:33.881 "subsystem": "vmd", 00:04:33.881 "config": [] 00:04:33.881 }, 00:04:33.881 { 00:04:33.881 "subsystem": "sock", 00:04:33.881 "config": [ 00:04:33.881 { 00:04:33.881 "method": "sock_impl_set_options", 00:04:33.881 "params": { 00:04:33.881 "impl_name": "posix", 00:04:33.881 "recv_buf_size": 2097152, 00:04:33.881 "send_buf_size": 2097152, 00:04:33.881 "enable_recv_pipe": true, 00:04:33.881 "enable_quickack": false, 00:04:33.881 "enable_placement_id": 0, 00:04:33.881 "enable_zerocopy_send_server": true, 00:04:33.881 "enable_zerocopy_send_client": false, 
00:04:33.881 "zerocopy_threshold": 0, 00:04:33.881 "tls_version": 0, 00:04:33.881 "enable_ktls": false 00:04:33.881 } 00:04:33.881 }, 00:04:33.881 { 00:04:33.881 "method": "sock_impl_set_options", 00:04:33.881 "params": { 00:04:33.881 "impl_name": "ssl", 00:04:33.881 "recv_buf_size": 4096, 00:04:33.881 "send_buf_size": 4096, 00:04:33.881 "enable_recv_pipe": true, 00:04:33.881 "enable_quickack": false, 00:04:33.881 "enable_placement_id": 0, 00:04:33.881 "enable_zerocopy_send_server": true, 00:04:33.881 "enable_zerocopy_send_client": false, 00:04:33.881 "zerocopy_threshold": 0, 00:04:33.881 "tls_version": 0, 00:04:33.881 "enable_ktls": false 00:04:33.881 } 00:04:33.881 } 00:04:33.881 ] 00:04:33.881 }, 00:04:33.881 { 00:04:33.881 "subsystem": "iobuf", 00:04:33.881 "config": [ 00:04:33.881 { 00:04:33.881 "method": "iobuf_set_options", 00:04:33.881 "params": { 00:04:33.881 "small_pool_count": 8192, 00:04:33.881 "large_pool_count": 1024, 00:04:33.881 "small_bufsize": 8192, 00:04:33.881 "large_bufsize": 135168 00:04:33.881 } 00:04:33.881 } 00:04:33.881 ] 00:04:33.881 }, 00:04:33.881 { 00:04:33.881 "subsystem": "keyring", 00:04:33.881 "config": [] 00:04:33.881 }, 00:04:33.881 { 00:04:33.881 "subsystem": "vfio_user_target", 00:04:33.881 "config": null 00:04:33.881 }, 00:04:33.881 { 00:04:33.881 "subsystem": "accel", 00:04:33.881 "config": [ 00:04:33.881 { 00:04:33.881 "method": "accel_set_options", 00:04:33.881 "params": { 00:04:33.881 "small_cache_size": 128, 00:04:33.881 "large_cache_size": 16, 00:04:33.881 "task_count": 2048, 00:04:33.881 "sequence_count": 2048, 00:04:33.881 "buf_count": 2048 00:04:33.881 } 00:04:33.881 } 00:04:33.881 ] 00:04:33.881 }, 00:04:33.881 { 00:04:33.881 "subsystem": "bdev", 00:04:33.881 "config": [ 00:04:33.881 { 00:04:33.881 "method": "bdev_set_options", 00:04:33.881 "params": { 00:04:33.881 "bdev_io_pool_size": 65535, 00:04:33.881 "bdev_io_cache_size": 256, 00:04:33.881 "bdev_auto_examine": true, 00:04:33.881 "iobuf_small_cache_size": 128, 00:04:33.881 "iobuf_large_cache_size": 16 00:04:33.881 } 00:04:33.881 }, 00:04:33.881 { 00:04:33.881 "method": "bdev_raid_set_options", 00:04:33.881 "params": { 00:04:33.881 "process_window_size_kb": 1024 00:04:33.881 } 00:04:33.881 }, 00:04:33.881 { 00:04:33.881 "method": "bdev_nvme_set_options", 00:04:33.881 "params": { 00:04:33.881 "action_on_timeout": "none", 00:04:33.881 "timeout_us": 0, 00:04:33.881 "timeout_admin_us": 0, 00:04:33.881 "keep_alive_timeout_ms": 10000, 00:04:33.881 "arbitration_burst": 0, 00:04:33.881 "low_priority_weight": 0, 00:04:33.881 "medium_priority_weight": 0, 00:04:33.881 "high_priority_weight": 0, 00:04:33.881 "nvme_adminq_poll_period_us": 10000, 00:04:33.881 "nvme_ioq_poll_period_us": 0, 00:04:33.881 "io_queue_requests": 0, 00:04:33.881 "delay_cmd_submit": true, 00:04:33.881 "transport_retry_count": 4, 00:04:33.881 "bdev_retry_count": 3, 00:04:33.881 "transport_ack_timeout": 0, 00:04:33.881 "ctrlr_loss_timeout_sec": 0, 00:04:33.881 "reconnect_delay_sec": 0, 00:04:33.881 "fast_io_fail_timeout_sec": 0, 00:04:33.881 "disable_auto_failback": false, 00:04:33.881 "generate_uuids": false, 00:04:33.881 "transport_tos": 0, 00:04:33.881 "nvme_error_stat": false, 00:04:33.881 "rdma_srq_size": 0, 00:04:33.881 "io_path_stat": false, 00:04:33.881 "allow_accel_sequence": false, 00:04:33.881 "rdma_max_cq_size": 0, 00:04:33.881 "rdma_cm_event_timeout_ms": 0, 00:04:33.881 "dhchap_digests": [ 00:04:33.881 "sha256", 00:04:33.881 "sha384", 00:04:33.881 "sha512" 00:04:33.881 ], 00:04:33.881 "dhchap_dhgroups": [ 
00:04:33.881 "null", 00:04:33.881 "ffdhe2048", 00:04:33.881 "ffdhe3072", 00:04:33.881 "ffdhe4096", 00:04:33.881 "ffdhe6144", 00:04:33.881 "ffdhe8192" 00:04:33.881 ] 00:04:33.881 } 00:04:33.881 }, 00:04:33.881 { 00:04:33.881 "method": "bdev_nvme_set_hotplug", 00:04:33.881 "params": { 00:04:33.881 "period_us": 100000, 00:04:33.881 "enable": false 00:04:33.881 } 00:04:33.881 }, 00:04:33.881 { 00:04:33.881 "method": "bdev_iscsi_set_options", 00:04:33.881 "params": { 00:04:33.881 "timeout_sec": 30 00:04:33.881 } 00:04:33.881 }, 00:04:33.881 { 00:04:33.881 "method": "bdev_wait_for_examine" 00:04:33.881 } 00:04:33.881 ] 00:04:33.881 }, 00:04:33.881 { 00:04:33.881 "subsystem": "nvmf", 00:04:33.881 "config": [ 00:04:33.881 { 00:04:33.881 "method": "nvmf_set_config", 00:04:33.881 "params": { 00:04:33.881 "discovery_filter": "match_any", 00:04:33.881 "admin_cmd_passthru": { 00:04:33.881 "identify_ctrlr": false 00:04:33.881 } 00:04:33.881 } 00:04:33.881 }, 00:04:33.881 { 00:04:33.881 "method": "nvmf_set_max_subsystems", 00:04:33.881 "params": { 00:04:33.881 "max_subsystems": 1024 00:04:33.881 } 00:04:33.881 }, 00:04:33.881 { 00:04:33.881 "method": "nvmf_set_crdt", 00:04:33.881 "params": { 00:04:33.881 "crdt1": 0, 00:04:33.881 "crdt2": 0, 00:04:33.881 "crdt3": 0 00:04:33.881 } 00:04:33.881 }, 00:04:33.881 { 00:04:33.881 "method": "nvmf_create_transport", 00:04:33.881 "params": { 00:04:33.881 "trtype": "TCP", 00:04:33.881 "max_queue_depth": 128, 00:04:33.881 "max_io_qpairs_per_ctrlr": 127, 00:04:33.881 "in_capsule_data_size": 4096, 00:04:33.881 "max_io_size": 131072, 00:04:33.881 "io_unit_size": 131072, 00:04:33.881 "max_aq_depth": 128, 00:04:33.881 "num_shared_buffers": 511, 00:04:33.881 "buf_cache_size": 4294967295, 00:04:33.881 "dif_insert_or_strip": false, 00:04:33.881 "zcopy": false, 00:04:33.881 "c2h_success": true, 00:04:33.881 "sock_priority": 0, 00:04:33.881 "abort_timeout_sec": 1, 00:04:33.881 "ack_timeout": 0, 00:04:33.881 "data_wr_pool_size": 0 00:04:33.881 } 00:04:33.881 } 00:04:33.881 ] 00:04:33.881 }, 00:04:33.881 { 00:04:33.881 "subsystem": "nbd", 00:04:33.881 "config": [] 00:04:33.881 }, 00:04:33.881 { 00:04:33.881 "subsystem": "ublk", 00:04:33.881 "config": [] 00:04:33.881 }, 00:04:33.881 { 00:04:33.881 "subsystem": "vhost_blk", 00:04:33.881 "config": [] 00:04:33.881 }, 00:04:33.881 { 00:04:33.881 "subsystem": "scsi", 00:04:33.881 "config": null 00:04:33.881 }, 00:04:33.881 { 00:04:33.881 "subsystem": "iscsi", 00:04:33.881 "config": [ 00:04:33.881 { 00:04:33.881 "method": "iscsi_set_options", 00:04:33.881 "params": { 00:04:33.881 "node_base": "iqn.2016-06.io.spdk", 00:04:33.881 "max_sessions": 128, 00:04:33.881 "max_connections_per_session": 2, 00:04:33.881 "max_queue_depth": 64, 00:04:33.881 "default_time2wait": 2, 00:04:33.881 "default_time2retain": 20, 00:04:33.881 "first_burst_length": 8192, 00:04:33.881 "immediate_data": true, 00:04:33.881 "allow_duplicated_isid": false, 00:04:33.881 "error_recovery_level": 0, 00:04:33.881 "nop_timeout": 60, 00:04:33.881 "nop_in_interval": 30, 00:04:33.881 "disable_chap": false, 00:04:33.881 "require_chap": false, 00:04:33.881 "mutual_chap": false, 00:04:33.881 "chap_group": 0, 00:04:33.881 "max_large_datain_per_connection": 64, 00:04:33.881 "max_r2t_per_connection": 4, 00:04:33.881 "pdu_pool_size": 36864, 00:04:33.881 "immediate_data_pool_size": 16384, 00:04:33.882 "data_out_pool_size": 2048 00:04:33.882 } 00:04:33.882 } 00:04:33.882 ] 00:04:33.882 }, 00:04:33.882 { 00:04:33.882 "subsystem": "vhost_scsi", 00:04:33.882 "config": [] 00:04:33.882 } 
00:04:33.882 ] 00:04:33.882 } 00:04:33.882 11:41:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:33.882 11:41:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3613043 00:04:33.882 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3613043 ']' 00:04:33.882 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3613043 00:04:33.882 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:04:33.882 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:33.882 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3613043 00:04:33.882 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:33.882 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:33.882 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3613043' 00:04:33.882 killing process with pid 3613043 00:04:33.882 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3613043 00:04:33.882 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3613043 00:04:34.141 11:41:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3613323 00:04:34.141 11:41:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:34.141 11:41:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:39.470 11:41:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3613323 00:04:39.470 11:41:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3613323 ']' 00:04:39.470 11:41:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3613323 00:04:39.470 11:41:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:04:39.470 11:41:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:39.470 11:41:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3613323 00:04:39.470 11:41:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:39.470 11:41:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:39.470 11:41:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3613323' 00:04:39.470 killing process with pid 3613323 00:04:39.470 11:41:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3613323 00:04:39.470 11:41:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3613323 00:04:39.470 11:41:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:39.470 11:41:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:39.470 00:04:39.470 real 0m6.732s 00:04:39.470 user 0m6.527s 00:04:39.470 sys 0m0.628s 00:04:39.470 11:41:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:04:39.470 11:41:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:39.470 ************************************ 00:04:39.470 END TEST skip_rpc_with_json 00:04:39.470 ************************************ 00:04:39.470 11:41:06 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:39.470 11:41:06 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:39.470 11:41:06 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:39.470 11:41:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.730 ************************************ 00:04:39.730 START TEST skip_rpc_with_delay 00:04:39.730 ************************************ 00:04:39.730 11:41:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:04:39.730 11:41:06 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:39.730 11:41:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:39.730 11:41:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:39.730 11:41:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.730 11:41:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:39.730 11:41:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.730 11:41:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:39.730 11:41:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.730 11:41:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:39.730 11:41:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.730 11:41:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:39.730 11:41:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:39.730 [2024-05-14 11:41:06.609091] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:39.730 [2024-05-14 11:41:06.609216] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:39.730 11:41:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:39.730 11:41:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:39.730 11:41:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:39.730 11:41:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:39.730 00:04:39.730 real 0m0.042s 00:04:39.730 user 0m0.020s 00:04:39.730 sys 0m0.021s 00:04:39.730 11:41:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:39.730 11:41:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:39.730 ************************************ 00:04:39.730 END TEST skip_rpc_with_delay 00:04:39.730 ************************************ 00:04:39.730 11:41:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:39.730 11:41:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:39.730 11:41:06 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:39.730 11:41:06 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:39.730 11:41:06 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:39.730 11:41:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.730 ************************************ 00:04:39.730 START TEST exit_on_failed_rpc_init 00:04:39.730 ************************************ 00:04:39.730 11:41:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:04:39.730 11:41:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3614447 00:04:39.730 11:41:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3614447 00:04:39.730 11:41:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 3614447 ']' 00:04:39.730 11:41:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.730 11:41:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:39.730 11:41:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.730 11:41:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:39.730 11:41:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:39.730 11:41:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:39.730 [2024-05-14 11:41:06.735878] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
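The skip_rpc_with_delay failure recorded above is a deliberate negative test: --wait-for-rpc is meaningless when the RPC server is disabled, so the target refuses to start. A hedged one-command reproduction (command copied from the trace, path shortened):

  # expected to exit non-zero and print the "--wait-for-rpc" error shown above;
  # the test wraps this in NOT, so the failure is what makes the test pass
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  echo "spdk_tgt exit code: $?"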
00:04:39.730 [2024-05-14 11:41:06.735936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3614447 ] 00:04:39.730 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.730 [2024-05-14 11:41:06.804084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.989 [2024-05-14 11:41:06.882915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.556 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:40.556 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:04:40.556 11:41:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.556 11:41:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.556 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:40.556 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.556 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.556 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:40.556 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.556 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:40.556 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.556 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:40.556 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.556 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:40.556 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.556 [2024-05-14 11:41:07.574476] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:04:40.556 [2024-05-14 11:41:07.574543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3614478 ] 00:04:40.556 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.556 [2024-05-14 11:41:07.642200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.815 [2024-05-14 11:41:07.714548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.815 [2024-05-14 11:41:07.714640] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. 
Specify another. 00:04:40.815 [2024-05-14 11:41:07.714653] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:40.815 [2024-05-14 11:41:07.714661] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:40.815 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:40.815 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:40.815 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:40.815 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:40.815 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:40.815 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:40.815 11:41:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:40.815 11:41:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3614447 00:04:40.815 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 3614447 ']' 00:04:40.815 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 3614447 00:04:40.815 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:04:40.815 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:40.815 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3614447 00:04:40.815 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:40.815 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:40.815 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3614447' 00:04:40.815 killing process with pid 3614447 00:04:40.815 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 3614447 00:04:40.815 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 3614447 00:04:41.075 00:04:41.075 real 0m1.422s 00:04:41.075 user 0m1.589s 00:04:41.075 sys 0m0.429s 00:04:41.075 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:41.075 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.075 ************************************ 00:04:41.075 END TEST exit_on_failed_rpc_init 00:04:41.075 ************************************ 00:04:41.334 11:41:08 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:41.334 00:04:41.334 real 0m14.024s 00:04:41.334 user 0m13.434s 00:04:41.334 sys 0m1.657s 00:04:41.334 11:41:08 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:41.334 11:41:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.334 ************************************ 00:04:41.334 END TEST skip_rpc 00:04:41.334 ************************************ 00:04:41.334 11:41:08 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:41.334 11:41:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:41.334 11:41:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 
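The exit_on_failed_rpc_init run that ends above shows that a second target cannot claim an RPC socket already owned by a live instance. A hedged two-process sketch of the same collision (paths shortened, sleep value illustrative, not part of the captured log):

  ./build/bin/spdk_tgt -m 0x1 &        # first instance binds /var/tmp/spdk.sock
  sleep 2
  ./build/bin/spdk_tgt -m 0x2          # second instance fails: "RPC Unix domain socket path ... in use"
  echo "second instance exited with $?"
  kill %1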
00:04:41.334 11:41:08 -- common/autotest_common.sh@10 -- # set +x 00:04:41.334 ************************************ 00:04:41.334 START TEST rpc_client 00:04:41.334 ************************************ 00:04:41.334 11:41:08 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:41.334 * Looking for test storage... 00:04:41.334 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:04:41.334 11:41:08 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:41.334 OK 00:04:41.334 11:41:08 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:41.334 00:04:41.334 real 0m0.133s 00:04:41.334 user 0m0.054s 00:04:41.334 sys 0m0.090s 00:04:41.334 11:41:08 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:41.334 11:41:08 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:41.334 ************************************ 00:04:41.334 END TEST rpc_client 00:04:41.334 ************************************ 00:04:41.592 11:41:08 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:41.592 11:41:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:41.593 11:41:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:41.593 11:41:08 -- common/autotest_common.sh@10 -- # set +x 00:04:41.593 ************************************ 00:04:41.593 START TEST json_config 00:04:41.593 ************************************ 00:04:41.593 11:41:08 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:04:41.593 11:41:08 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:41.593 11:41:08 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:41.593 11:41:08 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.593 11:41:08 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.593 11:41:08 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.593 11:41:08 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.593 11:41:08 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.593 11:41:08 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.593 11:41:08 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.593 11:41:08 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.593 11:41:08 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.593 11:41:08 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.593 11:41:08 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:04:41.593 11:41:08 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:04:41.593 11:41:08 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.593 11:41:08 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.593 11:41:08 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:41.593 11:41:08 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.593 11:41:08 json_config -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:41.593 11:41:08 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.593 11:41:08 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.593 11:41:08 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.593 11:41:08 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.593 11:41:08 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.593 11:41:08 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.593 11:41:08 json_config -- paths/export.sh@5 -- # export PATH 00:04:41.593 11:41:08 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.593 11:41:08 json_config -- nvmf/common.sh@47 -- # : 0 00:04:41.593 11:41:08 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:41.593 11:41:08 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:41.593 11:41:08 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.593 11:41:08 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.593 11:41:08 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.593 11:41:08 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:41.593 11:41:08 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:41.593 11:41:08 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:41.593 11:41:08 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:41.593 11:41:08 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:41.593 11:41:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:41.593 11:41:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:41.593 11:41:08 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 
SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:41.593 11:41:08 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:41.593 WARNING: No tests are enabled so not running JSON configuration tests 00:04:41.593 11:41:08 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:41.593 00:04:41.593 real 0m0.109s 00:04:41.593 user 0m0.049s 00:04:41.593 sys 0m0.061s 00:04:41.593 11:41:08 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:41.593 11:41:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.593 ************************************ 00:04:41.593 END TEST json_config 00:04:41.593 ************************************ 00:04:41.593 11:41:08 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:41.593 11:41:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:41.593 11:41:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:41.593 11:41:08 -- common/autotest_common.sh@10 -- # set +x 00:04:41.593 ************************************ 00:04:41.593 START TEST json_config_extra_key 00:04:41.593 ************************************ 00:04:41.593 11:41:08 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:41.852 11:41:08 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 
00:04:41.852 11:41:08 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.852 11:41:08 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.852 11:41:08 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.852 11:41:08 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.852 11:41:08 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.852 11:41:08 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.852 11:41:08 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:41.852 11:41:08 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:41.852 11:41:08 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:41.852 11:41:08 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:04:41.852 11:41:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:41.853 11:41:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:41.853 11:41:08 json_config_extra_key -- 
json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:41.853 11:41:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:41.853 11:41:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:41.853 11:41:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:41.853 11:41:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:41.853 11:41:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:41.853 11:41:08 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:41.853 11:41:08 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:41.853 INFO: launching applications... 00:04:41.853 11:41:08 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:41.853 11:41:08 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:41.853 11:41:08 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:41.853 11:41:08 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:41.853 11:41:08 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:41.853 11:41:08 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:41.853 11:41:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.853 11:41:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.853 11:41:08 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3614869 00:04:41.853 11:41:08 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:41.853 Waiting for target to run... 00:04:41.853 11:41:08 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3614869 /var/tmp/spdk_tgt.sock 00:04:41.853 11:41:08 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:04:41.853 11:41:08 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 3614869 ']' 00:04:41.853 11:41:08 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:41.853 11:41:08 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:41.853 11:41:08 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:41.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:41.853 11:41:08 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:41.853 11:41:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:41.853 [2024-05-14 11:41:08.816954] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
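The json_config_extra_key launch above starts the target from a pre-built JSON config on a private RPC socket. A hedged sketch of the same pattern; the spdk_tgt flags come from the trace, while the follow-up spdk_get_version call is only an illustration of how a client would address that socket:

  # boot spdk_tgt from a JSON config, listening on its own RPC socket
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json ./test/json_config/extra_key.json &
  sleep 5
  # every rpc.py call must now name the same socket with -s
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock spdk_get_version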
00:04:41.853 [2024-05-14 11:41:08.817018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3614869 ] 00:04:41.853 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.420 [2024-05-14 11:41:09.248723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.420 [2024-05-14 11:41:09.336362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.678 11:41:09 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:42.678 11:41:09 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:04:42.678 11:41:09 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:42.678 00:04:42.678 11:41:09 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:42.678 INFO: shutting down applications... 00:04:42.678 11:41:09 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:42.678 11:41:09 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:42.678 11:41:09 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:42.678 11:41:09 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3614869 ]] 00:04:42.678 11:41:09 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3614869 00:04:42.678 11:41:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:42.678 11:41:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.678 11:41:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3614869 00:04:42.678 11:41:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:43.246 11:41:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:43.246 11:41:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.246 11:41:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3614869 00:04:43.246 11:41:10 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:43.246 11:41:10 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:43.246 11:41:10 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:43.246 11:41:10 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:43.246 SPDK target shutdown done 00:04:43.247 11:41:10 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:43.247 Success 00:04:43.247 00:04:43.247 real 0m1.454s 00:04:43.247 user 0m1.035s 00:04:43.247 sys 0m0.547s 00:04:43.247 11:41:10 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:43.247 11:41:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:43.247 ************************************ 00:04:43.247 END TEST json_config_extra_key 00:04:43.247 ************************************ 00:04:43.247 11:41:10 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:43.247 11:41:10 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:43.247 11:41:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:43.247 11:41:10 -- common/autotest_common.sh@10 -- # set +x 00:04:43.247 ************************************ 
00:04:43.247 START TEST alias_rpc 00:04:43.247 ************************************ 00:04:43.247 11:41:10 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:43.247 * Looking for test storage... 00:04:43.247 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:04:43.247 11:41:10 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:43.247 11:41:10 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3615194 00:04:43.247 11:41:10 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3615194 00:04:43.247 11:41:10 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 3615194 ']' 00:04:43.247 11:41:10 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.247 11:41:10 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:43.247 11:41:10 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.247 11:41:10 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:43.247 11:41:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.247 11:41:10 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.247 [2024-05-14 11:41:10.330389] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:04:43.247 [2024-05-14 11:41:10.330456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3615194 ] 00:04:43.505 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.505 [2024-05-14 11:41:10.398886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.505 [2024-05-14 11:41:10.478188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.071 11:41:11 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:44.071 11:41:11 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:44.071 11:41:11 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:44.330 11:41:11 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3615194 00:04:44.330 11:41:11 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 3615194 ']' 00:04:44.330 11:41:11 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 3615194 00:04:44.330 11:41:11 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:04:44.330 11:41:11 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:44.330 11:41:11 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3615194 00:04:44.330 11:41:11 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:44.330 11:41:11 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:44.330 11:41:11 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3615194' 00:04:44.330 killing process with pid 3615194 00:04:44.330 11:41:11 alias_rpc -- common/autotest_common.sh@965 -- # kill 3615194 00:04:44.330 11:41:11 alias_rpc -- common/autotest_common.sh@970 -- # wait 
3615194 00:04:44.899 00:04:44.899 real 0m1.462s 00:04:44.899 user 0m1.554s 00:04:44.899 sys 0m0.432s 00:04:44.899 11:41:11 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:44.899 11:41:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.899 ************************************ 00:04:44.899 END TEST alias_rpc 00:04:44.899 ************************************ 00:04:44.899 11:41:11 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:04:44.899 11:41:11 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:44.899 11:41:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:44.899 11:41:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:44.899 11:41:11 -- common/autotest_common.sh@10 -- # set +x 00:04:44.899 ************************************ 00:04:44.899 START TEST spdkcli_tcp 00:04:44.899 ************************************ 00:04:44.899 11:41:11 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:44.899 * Looking for test storage... 00:04:44.899 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:04:44.899 11:41:11 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:04:44.899 11:41:11 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:44.899 11:41:11 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:04:44.899 11:41:11 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:44.899 11:41:11 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:44.899 11:41:11 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:44.899 11:41:11 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:44.899 11:41:11 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:44.899 11:41:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:44.899 11:41:11 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3615513 00:04:44.899 11:41:11 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3615513 00:04:44.899 11:41:11 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:44.899 11:41:11 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 3615513 ']' 00:04:44.899 11:41:11 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.899 11:41:11 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:44.899 11:41:11 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.899 11:41:11 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:44.899 11:41:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:44.899 [2024-05-14 11:41:11.908640] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:04:44.899 [2024-05-14 11:41:11.908707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3615513 ] 00:04:44.899 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.899 [2024-05-14 11:41:11.977566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:45.157 [2024-05-14 11:41:12.058554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.157 [2024-05-14 11:41:12.058557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.723 11:41:12 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:45.723 11:41:12 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:04:45.723 11:41:12 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3615777 00:04:45.723 11:41:12 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:45.723 11:41:12 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:45.982 [ 00:04:45.982 "spdk_get_version", 00:04:45.982 "rpc_get_methods", 00:04:45.982 "trace_get_info", 00:04:45.982 "trace_get_tpoint_group_mask", 00:04:45.982 "trace_disable_tpoint_group", 00:04:45.982 "trace_enable_tpoint_group", 00:04:45.982 "trace_clear_tpoint_mask", 00:04:45.982 "trace_set_tpoint_mask", 00:04:45.982 "vfu_tgt_set_base_path", 00:04:45.982 "framework_get_pci_devices", 00:04:45.982 "framework_get_config", 00:04:45.982 "framework_get_subsystems", 00:04:45.982 "keyring_get_keys", 00:04:45.982 "iobuf_get_stats", 00:04:45.982 "iobuf_set_options", 00:04:45.982 "sock_get_default_impl", 00:04:45.982 "sock_set_default_impl", 00:04:45.982 "sock_impl_set_options", 00:04:45.982 "sock_impl_get_options", 00:04:45.982 "vmd_rescan", 00:04:45.982 "vmd_remove_device", 00:04:45.982 "vmd_enable", 00:04:45.982 "accel_get_stats", 00:04:45.982 "accel_set_options", 00:04:45.982 "accel_set_driver", 00:04:45.982 "accel_crypto_key_destroy", 00:04:45.982 "accel_crypto_keys_get", 00:04:45.982 "accel_crypto_key_create", 00:04:45.982 "accel_assign_opc", 00:04:45.982 "accel_get_module_info", 00:04:45.982 "accel_get_opc_assignments", 00:04:45.982 "notify_get_notifications", 00:04:45.982 "notify_get_types", 00:04:45.982 "bdev_get_histogram", 00:04:45.982 "bdev_enable_histogram", 00:04:45.982 "bdev_set_qos_limit", 00:04:45.982 "bdev_set_qd_sampling_period", 00:04:45.982 "bdev_get_bdevs", 00:04:45.982 "bdev_reset_iostat", 00:04:45.982 "bdev_get_iostat", 00:04:45.982 "bdev_examine", 00:04:45.982 "bdev_wait_for_examine", 00:04:45.982 "bdev_set_options", 00:04:45.982 "scsi_get_devices", 00:04:45.982 "thread_set_cpumask", 00:04:45.982 "framework_get_scheduler", 00:04:45.982 "framework_set_scheduler", 00:04:45.982 "framework_get_reactors", 00:04:45.982 "thread_get_io_channels", 00:04:45.982 "thread_get_pollers", 00:04:45.982 "thread_get_stats", 00:04:45.982 "framework_monitor_context_switch", 00:04:45.982 "spdk_kill_instance", 00:04:45.982 "log_enable_timestamps", 00:04:45.982 "log_get_flags", 00:04:45.982 "log_clear_flag", 00:04:45.982 "log_set_flag", 00:04:45.982 "log_get_level", 00:04:45.982 "log_set_level", 00:04:45.982 "log_get_print_level", 00:04:45.982 "log_set_print_level", 00:04:45.982 "framework_enable_cpumask_locks", 00:04:45.982 "framework_disable_cpumask_locks", 00:04:45.982 "framework_wait_init", 00:04:45.982 
"framework_start_init", 00:04:45.982 "virtio_blk_create_transport", 00:04:45.982 "virtio_blk_get_transports", 00:04:45.982 "vhost_controller_set_coalescing", 00:04:45.982 "vhost_get_controllers", 00:04:45.982 "vhost_delete_controller", 00:04:45.982 "vhost_create_blk_controller", 00:04:45.982 "vhost_scsi_controller_remove_target", 00:04:45.982 "vhost_scsi_controller_add_target", 00:04:45.982 "vhost_start_scsi_controller", 00:04:45.982 "vhost_create_scsi_controller", 00:04:45.982 "ublk_recover_disk", 00:04:45.982 "ublk_get_disks", 00:04:45.982 "ublk_stop_disk", 00:04:45.982 "ublk_start_disk", 00:04:45.982 "ublk_destroy_target", 00:04:45.982 "ublk_create_target", 00:04:45.982 "nbd_get_disks", 00:04:45.982 "nbd_stop_disk", 00:04:45.982 "nbd_start_disk", 00:04:45.982 "env_dpdk_get_mem_stats", 00:04:45.982 "nvmf_subsystem_get_listeners", 00:04:45.982 "nvmf_subsystem_get_qpairs", 00:04:45.982 "nvmf_subsystem_get_controllers", 00:04:45.982 "nvmf_get_stats", 00:04:45.982 "nvmf_get_transports", 00:04:45.982 "nvmf_create_transport", 00:04:45.982 "nvmf_get_targets", 00:04:45.982 "nvmf_delete_target", 00:04:45.982 "nvmf_create_target", 00:04:45.982 "nvmf_subsystem_allow_any_host", 00:04:45.982 "nvmf_subsystem_remove_host", 00:04:45.982 "nvmf_subsystem_add_host", 00:04:45.982 "nvmf_ns_remove_host", 00:04:45.982 "nvmf_ns_add_host", 00:04:45.982 "nvmf_subsystem_remove_ns", 00:04:45.982 "nvmf_subsystem_add_ns", 00:04:45.982 "nvmf_subsystem_listener_set_ana_state", 00:04:45.982 "nvmf_discovery_get_referrals", 00:04:45.982 "nvmf_discovery_remove_referral", 00:04:45.982 "nvmf_discovery_add_referral", 00:04:45.982 "nvmf_subsystem_remove_listener", 00:04:45.982 "nvmf_subsystem_add_listener", 00:04:45.982 "nvmf_delete_subsystem", 00:04:45.982 "nvmf_create_subsystem", 00:04:45.982 "nvmf_get_subsystems", 00:04:45.982 "nvmf_set_crdt", 00:04:45.982 "nvmf_set_config", 00:04:45.982 "nvmf_set_max_subsystems", 00:04:45.982 "iscsi_get_histogram", 00:04:45.982 "iscsi_enable_histogram", 00:04:45.982 "iscsi_set_options", 00:04:45.982 "iscsi_get_auth_groups", 00:04:45.982 "iscsi_auth_group_remove_secret", 00:04:45.982 "iscsi_auth_group_add_secret", 00:04:45.982 "iscsi_delete_auth_group", 00:04:45.982 "iscsi_create_auth_group", 00:04:45.982 "iscsi_set_discovery_auth", 00:04:45.982 "iscsi_get_options", 00:04:45.982 "iscsi_target_node_request_logout", 00:04:45.982 "iscsi_target_node_set_redirect", 00:04:45.982 "iscsi_target_node_set_auth", 00:04:45.982 "iscsi_target_node_add_lun", 00:04:45.982 "iscsi_get_stats", 00:04:45.982 "iscsi_get_connections", 00:04:45.982 "iscsi_portal_group_set_auth", 00:04:45.982 "iscsi_start_portal_group", 00:04:45.982 "iscsi_delete_portal_group", 00:04:45.982 "iscsi_create_portal_group", 00:04:45.982 "iscsi_get_portal_groups", 00:04:45.982 "iscsi_delete_target_node", 00:04:45.982 "iscsi_target_node_remove_pg_ig_maps", 00:04:45.982 "iscsi_target_node_add_pg_ig_maps", 00:04:45.982 "iscsi_create_target_node", 00:04:45.982 "iscsi_get_target_nodes", 00:04:45.983 "iscsi_delete_initiator_group", 00:04:45.983 "iscsi_initiator_group_remove_initiators", 00:04:45.983 "iscsi_initiator_group_add_initiators", 00:04:45.983 "iscsi_create_initiator_group", 00:04:45.983 "iscsi_get_initiator_groups", 00:04:45.983 "keyring_file_remove_key", 00:04:45.983 "keyring_file_add_key", 00:04:45.983 "vfu_virtio_create_scsi_endpoint", 00:04:45.983 "vfu_virtio_scsi_remove_target", 00:04:45.983 "vfu_virtio_scsi_add_target", 00:04:45.983 "vfu_virtio_create_blk_endpoint", 00:04:45.983 "vfu_virtio_delete_endpoint", 00:04:45.983 
"iaa_scan_accel_module", 00:04:45.983 "dsa_scan_accel_module", 00:04:45.983 "ioat_scan_accel_module", 00:04:45.983 "accel_error_inject_error", 00:04:45.983 "bdev_iscsi_delete", 00:04:45.983 "bdev_iscsi_create", 00:04:45.983 "bdev_iscsi_set_options", 00:04:45.983 "bdev_virtio_attach_controller", 00:04:45.983 "bdev_virtio_scsi_get_devices", 00:04:45.983 "bdev_virtio_detach_controller", 00:04:45.983 "bdev_virtio_blk_set_hotplug", 00:04:45.983 "bdev_ftl_set_property", 00:04:45.983 "bdev_ftl_get_properties", 00:04:45.983 "bdev_ftl_get_stats", 00:04:45.983 "bdev_ftl_unmap", 00:04:45.983 "bdev_ftl_unload", 00:04:45.983 "bdev_ftl_delete", 00:04:45.983 "bdev_ftl_load", 00:04:45.983 "bdev_ftl_create", 00:04:45.983 "bdev_aio_delete", 00:04:45.983 "bdev_aio_rescan", 00:04:45.983 "bdev_aio_create", 00:04:45.983 "blobfs_create", 00:04:45.983 "blobfs_detect", 00:04:45.983 "blobfs_set_cache_size", 00:04:45.983 "bdev_zone_block_delete", 00:04:45.983 "bdev_zone_block_create", 00:04:45.983 "bdev_delay_delete", 00:04:45.983 "bdev_delay_create", 00:04:45.983 "bdev_delay_update_latency", 00:04:45.983 "bdev_split_delete", 00:04:45.983 "bdev_split_create", 00:04:45.983 "bdev_error_inject_error", 00:04:45.983 "bdev_error_delete", 00:04:45.983 "bdev_error_create", 00:04:45.983 "bdev_raid_set_options", 00:04:45.983 "bdev_raid_remove_base_bdev", 00:04:45.983 "bdev_raid_add_base_bdev", 00:04:45.983 "bdev_raid_delete", 00:04:45.983 "bdev_raid_create", 00:04:45.983 "bdev_raid_get_bdevs", 00:04:45.983 "bdev_lvol_check_shallow_copy", 00:04:45.983 "bdev_lvol_start_shallow_copy", 00:04:45.983 "bdev_lvol_grow_lvstore", 00:04:45.983 "bdev_lvol_get_lvols", 00:04:45.983 "bdev_lvol_get_lvstores", 00:04:45.983 "bdev_lvol_delete", 00:04:45.983 "bdev_lvol_set_read_only", 00:04:45.983 "bdev_lvol_resize", 00:04:45.983 "bdev_lvol_decouple_parent", 00:04:45.983 "bdev_lvol_inflate", 00:04:45.983 "bdev_lvol_rename", 00:04:45.983 "bdev_lvol_clone_bdev", 00:04:45.983 "bdev_lvol_clone", 00:04:45.983 "bdev_lvol_snapshot", 00:04:45.983 "bdev_lvol_create", 00:04:45.983 "bdev_lvol_delete_lvstore", 00:04:45.983 "bdev_lvol_rename_lvstore", 00:04:45.983 "bdev_lvol_create_lvstore", 00:04:45.983 "bdev_passthru_delete", 00:04:45.983 "bdev_passthru_create", 00:04:45.983 "bdev_nvme_cuse_unregister", 00:04:45.983 "bdev_nvme_cuse_register", 00:04:45.983 "bdev_opal_new_user", 00:04:45.983 "bdev_opal_set_lock_state", 00:04:45.983 "bdev_opal_delete", 00:04:45.983 "bdev_opal_get_info", 00:04:45.983 "bdev_opal_create", 00:04:45.983 "bdev_nvme_opal_revert", 00:04:45.983 "bdev_nvme_opal_init", 00:04:45.983 "bdev_nvme_send_cmd", 00:04:45.983 "bdev_nvme_get_path_iostat", 00:04:45.983 "bdev_nvme_get_mdns_discovery_info", 00:04:45.983 "bdev_nvme_stop_mdns_discovery", 00:04:45.983 "bdev_nvme_start_mdns_discovery", 00:04:45.983 "bdev_nvme_set_multipath_policy", 00:04:45.983 "bdev_nvme_set_preferred_path", 00:04:45.983 "bdev_nvme_get_io_paths", 00:04:45.983 "bdev_nvme_remove_error_injection", 00:04:45.983 "bdev_nvme_add_error_injection", 00:04:45.983 "bdev_nvme_get_discovery_info", 00:04:45.983 "bdev_nvme_stop_discovery", 00:04:45.983 "bdev_nvme_start_discovery", 00:04:45.983 "bdev_nvme_get_controller_health_info", 00:04:45.983 "bdev_nvme_disable_controller", 00:04:45.983 "bdev_nvme_enable_controller", 00:04:45.983 "bdev_nvme_reset_controller", 00:04:45.983 "bdev_nvme_get_transport_statistics", 00:04:45.983 "bdev_nvme_apply_firmware", 00:04:45.983 "bdev_nvme_detach_controller", 00:04:45.983 "bdev_nvme_get_controllers", 00:04:45.983 "bdev_nvme_attach_controller", 
00:04:45.983 "bdev_nvme_set_hotplug", 00:04:45.983 "bdev_nvme_set_options", 00:04:45.983 "bdev_null_resize", 00:04:45.983 "bdev_null_delete", 00:04:45.983 "bdev_null_create", 00:04:45.983 "bdev_malloc_delete", 00:04:45.983 "bdev_malloc_create" 00:04:45.983 ] 00:04:45.983 11:41:12 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:45.983 11:41:12 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:45.983 11:41:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:45.983 11:41:12 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:45.983 11:41:12 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3615513 00:04:45.983 11:41:12 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 3615513 ']' 00:04:45.983 11:41:12 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 3615513 00:04:45.983 11:41:12 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:04:45.983 11:41:12 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:45.983 11:41:12 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3615513 00:04:45.983 11:41:13 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:45.983 11:41:13 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:45.983 11:41:13 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3615513' 00:04:45.983 killing process with pid 3615513 00:04:45.983 11:41:13 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 3615513 00:04:45.983 11:41:13 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 3615513 00:04:46.241 00:04:46.241 real 0m1.538s 00:04:46.241 user 0m2.811s 00:04:46.241 sys 0m0.506s 00:04:46.241 11:41:13 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:46.241 11:41:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:46.241 ************************************ 00:04:46.241 END TEST spdkcli_tcp 00:04:46.241 ************************************ 00:04:46.499 11:41:13 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:46.499 11:41:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:46.499 11:41:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:46.499 11:41:13 -- common/autotest_common.sh@10 -- # set +x 00:04:46.499 ************************************ 00:04:46.499 START TEST dpdk_mem_utility 00:04:46.499 ************************************ 00:04:46.499 11:41:13 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:46.499 * Looking for test storage... 
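The spdkcli_tcp test above exercises the RPC server over TCP instead of the default UNIX socket: a socat process bridges TCP port 9998 to /var/tmp/spdk.sock, and rpc.py is pointed at 127.0.0.1:9998 with retries, which is why the full rpc_get_methods listing is echoed into the log. A sketch of the same bridge, assuming spdk_tgt is already listening on /var/tmp/spdk.sock and SPDK_ROOT is a placeholder:

  SPDK_ROOT=/path/to/spdk            # placeholder

  # Bridge TCP 9998 to the target's UNIX-domain RPC socket, as tcp.sh does.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!

  # Issue an RPC over TCP; -r/-t keep retrying until the listener is actually up.
  "$SPDK_ROOT/scripts/rpc.py" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

  kill "$socat_pid" 2>/dev/null || true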
00:04:46.499 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:04:46.499 11:41:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:46.499 11:41:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3615850 00:04:46.499 11:41:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3615850 00:04:46.499 11:41:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.499 11:41:13 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 3615850 ']' 00:04:46.499 11:41:13 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.499 11:41:13 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:46.499 11:41:13 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.499 11:41:13 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:46.499 11:41:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:46.499 [2024-05-14 11:41:13.515923] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:04:46.499 [2024-05-14 11:41:13.516000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3615850 ] 00:04:46.499 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.499 [2024-05-14 11:41:13.585161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.757 [2024-05-14 11:41:13.660387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.324 11:41:14 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:47.324 11:41:14 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:04:47.324 11:41:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:47.324 11:41:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:47.324 11:41:14 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.324 11:41:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:47.324 { 00:04:47.324 "filename": "/tmp/spdk_mem_dump.txt" 00:04:47.324 } 00:04:47.324 11:41:14 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.324 11:41:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:47.324 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:47.324 1 heaps totaling size 814.000000 MiB 00:04:47.324 size: 814.000000 MiB heap id: 0 00:04:47.324 end heaps---------- 00:04:47.324 8 mempools totaling size 598.116089 MiB 00:04:47.324 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:47.324 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:47.324 size: 84.521057 MiB name: bdev_io_3615850 00:04:47.324 size: 51.011292 MiB name: evtpool_3615850 00:04:47.324 size: 50.003479 MiB 
name: msgpool_3615850 00:04:47.324 size: 21.763794 MiB name: PDU_Pool 00:04:47.324 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:47.324 size: 0.026123 MiB name: Session_Pool 00:04:47.324 end mempools------- 00:04:47.324 6 memzones totaling size 4.142822 MiB 00:04:47.324 size: 1.000366 MiB name: RG_ring_0_3615850 00:04:47.324 size: 1.000366 MiB name: RG_ring_1_3615850 00:04:47.324 size: 1.000366 MiB name: RG_ring_4_3615850 00:04:47.324 size: 1.000366 MiB name: RG_ring_5_3615850 00:04:47.324 size: 0.125366 MiB name: RG_ring_2_3615850 00:04:47.324 size: 0.015991 MiB name: RG_ring_3_3615850 00:04:47.324 end memzones------- 00:04:47.324 11:41:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:47.584 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:47.584 list of free elements. size: 12.519348 MiB 00:04:47.584 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:47.584 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:47.584 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:47.584 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:47.584 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:47.584 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:47.584 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:47.584 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:47.584 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:47.584 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:47.584 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:47.584 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:47.584 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:47.584 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:47.584 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:47.584 list of standard malloc elements. 
size: 199.218079 MiB 00:04:47.584 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:47.584 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:47.584 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:47.584 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:47.584 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:47.584 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:47.584 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:47.584 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:47.584 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:47.584 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:47.584 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:47.584 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:47.584 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:47.584 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:47.584 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:47.584 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:47.584 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:47.584 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:47.584 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:47.584 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:47.584 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:47.584 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:47.584 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:47.584 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:47.584 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:47.584 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:47.584 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:47.584 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:47.584 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:47.584 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:47.584 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:47.584 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:47.584 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:47.584 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:47.584 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:47.584 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:47.584 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:47.584 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:47.584 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:47.584 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:47.584 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:47.584 list of memzone associated elements. 
size: 602.262573 MiB 00:04:47.584 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:47.584 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:47.584 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:47.584 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:47.584 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:47.584 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3615850_0 00:04:47.584 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:47.584 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3615850_0 00:04:47.584 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:47.584 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3615850_0 00:04:47.584 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:47.584 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:47.584 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:47.584 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:47.584 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:47.584 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3615850 00:04:47.584 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:47.584 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3615850 00:04:47.584 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:47.584 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3615850 00:04:47.584 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:47.584 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:47.584 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:47.584 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:47.584 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:47.584 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:47.584 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:47.584 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:47.584 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:47.584 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3615850 00:04:47.584 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:47.584 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3615850 00:04:47.584 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:47.584 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3615850 00:04:47.584 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:47.584 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3615850 00:04:47.584 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:47.584 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3615850 00:04:47.584 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:47.584 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:47.584 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:47.584 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:47.584 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:47.584 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:47.584 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:47.584 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3615850 00:04:47.584 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:47.584 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:47.584 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:47.584 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:47.584 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:47.584 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3615850 00:04:47.584 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:47.584 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:47.584 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:47.584 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3615850 00:04:47.584 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:47.584 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3615850 00:04:47.584 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:47.584 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:47.584 11:41:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:47.584 11:41:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3615850 00:04:47.584 11:41:14 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 3615850 ']' 00:04:47.584 11:41:14 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 3615850 00:04:47.584 11:41:14 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:04:47.584 11:41:14 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:47.584 11:41:14 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3615850 00:04:47.584 11:41:14 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:47.584 11:41:14 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:47.584 11:41:14 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3615850' 00:04:47.584 killing process with pid 3615850 00:04:47.584 11:41:14 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 3615850 00:04:47.584 11:41:14 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 3615850 00:04:47.843 00:04:47.843 real 0m1.421s 00:04:47.843 user 0m1.455s 00:04:47.843 sys 0m0.453s 00:04:47.843 11:41:14 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:47.843 11:41:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:47.843 ************************************ 00:04:47.843 END TEST dpdk_mem_utility 00:04:47.843 ************************************ 00:04:47.843 11:41:14 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:04:47.843 11:41:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:47.843 11:41:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:47.843 11:41:14 -- common/autotest_common.sh@10 -- # set +x 00:04:47.843 ************************************ 00:04:47.843 START TEST event 00:04:47.843 ************************************ 00:04:47.843 11:41:14 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:04:48.102 * Looking for test storage... 
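The dpdk_mem_utility test above does two things: it calls the env_dpdk_get_mem_stats RPC, which makes the target write a heap/mempool dump to /tmp/spdk_mem_dump.txt, and it runs scripts/dpdk_mem_info.py, which condenses that dump into the heap, mempool and memzone summaries seen in the log (the -m 0 form prints the per-element detail for heap 0). Against a running spdk_tgt the sequence is, with SPDK_ROOT as a placeholder:

  SPDK_ROOT=/path/to/spdk            # placeholder

  # Ask the running target to dump its DPDK memory state; the reply names the file.
  "$SPDK_ROOT/scripts/rpc.py" env_dpdk_get_mem_stats
  # -> {"filename": "/tmp/spdk_mem_dump.txt"}

  # Summarize the dump, then show heap 0 element by element.
  "$SPDK_ROOT/scripts/dpdk_mem_info.py"
  "$SPDK_ROOT/scripts/dpdk_mem_info.py" -m 0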
00:04:48.102 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:04:48.102 11:41:14 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:48.102 11:41:14 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:48.102 11:41:14 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:48.102 11:41:14 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:04:48.102 11:41:14 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:48.102 11:41:14 event -- common/autotest_common.sh@10 -- # set +x 00:04:48.102 ************************************ 00:04:48.102 START TEST event_perf 00:04:48.102 ************************************ 00:04:48.102 11:41:15 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:48.102 Running I/O for 1 seconds...[2024-05-14 11:41:15.052758] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:04:48.102 [2024-05-14 11:41:15.052843] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3616176 ] 00:04:48.102 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.102 [2024-05-14 11:41:15.126351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:48.361 [2024-05-14 11:41:15.201558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.361 [2024-05-14 11:41:15.201656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.361 [2024-05-14 11:41:15.201739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:48.361 [2024-05-14 11:41:15.201741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.296 Running I/O for 1 seconds... 00:04:49.296 lcore 0: 190552 00:04:49.296 lcore 1: 190550 00:04:49.296 lcore 2: 190551 00:04:49.296 lcore 3: 190551 00:04:49.296 done. 00:04:49.296 00:04:49.296 real 0m1.232s 00:04:49.296 user 0m4.126s 00:04:49.296 sys 0m0.103s 00:04:49.296 11:41:16 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:49.296 11:41:16 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:49.296 ************************************ 00:04:49.296 END TEST event_perf 00:04:49.296 ************************************ 00:04:49.296 11:41:16 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:49.296 11:41:16 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:04:49.296 11:41:16 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:49.296 11:41:16 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.296 ************************************ 00:04:49.296 START TEST event_reactor 00:04:49.296 ************************************ 00:04:49.296 11:41:16 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:49.296 [2024-05-14 11:41:16.367601] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:04:49.296 [2024-05-14 11:41:16.367682] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3616469 ] 00:04:49.555 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.555 [2024-05-14 11:41:16.438120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.555 [2024-05-14 11:41:16.509462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.490 test_start 00:04:50.490 oneshot 00:04:50.490 tick 100 00:04:50.490 tick 100 00:04:50.490 tick 250 00:04:50.490 tick 100 00:04:50.490 tick 100 00:04:50.490 tick 100 00:04:50.490 tick 250 00:04:50.490 tick 500 00:04:50.490 tick 100 00:04:50.490 tick 100 00:04:50.490 tick 250 00:04:50.490 tick 100 00:04:50.490 tick 100 00:04:50.490 test_end 00:04:50.490 00:04:50.490 real 0m1.222s 00:04:50.490 user 0m1.138s 00:04:50.490 sys 0m0.079s 00:04:50.490 11:41:17 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:50.490 11:41:17 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:50.490 ************************************ 00:04:50.490 END TEST event_reactor 00:04:50.490 ************************************ 00:04:50.748 11:41:17 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:50.748 11:41:17 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:04:50.748 11:41:17 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:50.748 11:41:17 event -- common/autotest_common.sh@10 -- # set +x 00:04:50.748 ************************************ 00:04:50.748 START TEST event_reactor_perf 00:04:50.748 ************************************ 00:04:50.748 11:41:17 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:50.748 [2024-05-14 11:41:17.673126] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:04:50.748 [2024-05-14 11:41:17.673207] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3616751 ] 00:04:50.748 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.748 [2024-05-14 11:41:17.743706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.748 [2024-05-14 11:41:17.813768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.238 test_start 00:04:52.238 test_end 00:04:52.238 Performance: 947155 events per second 00:04:52.238 00:04:52.238 real 0m1.224s 00:04:52.238 user 0m1.131s 00:04:52.238 sys 0m0.088s 00:04:52.238 11:41:18 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:52.238 11:41:18 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:52.238 ************************************ 00:04:52.238 END TEST event_reactor_perf 00:04:52.238 ************************************ 00:04:52.238 11:41:18 event -- event/event.sh@49 -- # uname -s 00:04:52.238 11:41:18 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:52.238 11:41:18 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:52.238 11:41:18 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:52.238 11:41:18 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:52.238 11:41:18 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.238 ************************************ 00:04:52.238 START TEST event_scheduler 00:04:52.238 ************************************ 00:04:52.238 11:41:18 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:52.238 * Looking for test storage... 00:04:52.238 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:04:52.238 11:41:19 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:52.238 11:41:19 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3617059 00:04:52.238 11:41:19 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.238 11:41:19 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:52.238 11:41:19 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3617059 00:04:52.238 11:41:19 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 3617059 ']' 00:04:52.238 11:41:19 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.238 11:41:19 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:52.238 11:41:19 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
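The three event micro-benchmarks above (event_perf, reactor, reactor_perf) are standalone binaries rather than RPC-driven apps: each takes a core mask and a runtime in seconds, spins the event framework for that long, and prints per-lcore event counts, the oneshot/tick trace, or an events-per-second figure. Assuming the test binaries are built in place under a placeholder SPDK_ROOT, the invocations mirror the ones in the log:

  SPDK_ROOT=/path/to/spdk            # placeholder

  # Per-lcore event throughput on four cores for one second.
  "$SPDK_ROOT/test/event/event_perf/event_perf" -m 0xF -t 1

  # Single-core reactor oneshot/tick exercise for one second.
  "$SPDK_ROOT/test/event/reactor/reactor" -t 1

  # Single-core event rate ("Performance: N events per second").
  "$SPDK_ROOT/test/event/reactor_perf/reactor_perf" -t 1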
00:04:52.238 11:41:19 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:52.238 11:41:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:52.238 [2024-05-14 11:41:19.082657] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:04:52.238 [2024-05-14 11:41:19.082745] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3617059 ] 00:04:52.238 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.238 [2024-05-14 11:41:19.149891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:52.238 [2024-05-14 11:41:19.225936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.238 [2024-05-14 11:41:19.226021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.238 [2024-05-14 11:41:19.226102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:52.238 [2024-05-14 11:41:19.226105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:53.173 11:41:19 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:53.173 11:41:19 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:04:53.173 11:41:19 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:53.173 11:41:19 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.173 11:41:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:53.173 POWER: Env isn't set yet! 00:04:53.173 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:53.173 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:53.173 POWER: Cannot set governor of lcore 0 to userspace 00:04:53.173 POWER: Attempting to initialise PSTAT power management... 00:04:53.173 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:53.173 POWER: Initialized successfully for lcore 0 power management 00:04:53.173 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:53.173 POWER: Initialized successfully for lcore 1 power management 00:04:53.173 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:53.173 POWER: Initialized successfully for lcore 2 power management 00:04:53.173 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:53.173 POWER: Initialized successfully for lcore 3 power management 00:04:53.173 11:41:19 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.173 11:41:19 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:53.173 11:41:19 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.173 11:41:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:53.173 [2024-05-14 11:41:20.029868] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
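The scheduler test app above is started with --wait-for-rpc, so initialization is completed over RPC: framework_set_scheduler dynamic is issued first, then framework_start_init, which is also the point where the ACPI/PSTAT governors are switched to 'performance' (they are restored on shutdown, as the later POWER messages show). A sketch of that startup handshake, reusing the flags from the run above and a placeholder SPDK_ROOT:

  SPDK_ROOT=/path/to/spdk            # placeholder

  # Start the test app paused: 4 cores, main lcore 2 (-p 0x2), waiting for RPC.
  "$SPDK_ROOT/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &

  # Wait for the default RPC socket before issuing commands.
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

  # Select the dynamic scheduler, then let initialization proceed.
  "$SPDK_ROOT/scripts/rpc.py" framework_set_scheduler dynamic
  "$SPDK_ROOT/scripts/rpc.py" framework_start_init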
00:04:53.173 11:41:20 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.173 11:41:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:53.173 11:41:20 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:53.173 11:41:20 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.173 11:41:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:53.173 ************************************ 00:04:53.173 START TEST scheduler_create_thread 00:04:53.173 ************************************ 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.173 2 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.173 3 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.173 4 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.173 5 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.173 6 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.173 7 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.173 8 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.173 9 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.173 10 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.173 11:41:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.105 11:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.105 11:41:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:54.105 11:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.105 11:41:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.486 11:41:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.486 11:41:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:55.486 11:41:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:55.486 11:41:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.486 11:41:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.420 11:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.420 00:04:56.420 real 0m3.382s 00:04:56.420 user 0m0.010s 00:04:56.420 sys 0m0.007s 00:04:56.420 11:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.420 11:41:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.420 ************************************ 00:04:56.420 END TEST scheduler_create_thread 00:04:56.420 ************************************ 00:04:56.420 11:41:23 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:56.420 11:41:23 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3617059 00:04:56.420 11:41:23 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 3617059 ']' 00:04:56.420 11:41:23 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 3617059 00:04:56.420 11:41:23 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:04:56.420 11:41:23 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:56.420 11:41:23 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3617059 00:04:56.678 11:41:23 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:04:56.678 11:41:23 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:04:56.678 11:41:23 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3617059' 00:04:56.678 killing process with pid 3617059 00:04:56.678 11:41:23 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 3617059 00:04:56.678 11:41:23 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 3617059 00:04:56.935 [2024-05-14 11:41:23.837897] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
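The scheduler_create_thread test above drives the running app through an rpc.py plugin: scheduler_plugin adds scheduler_thread_create, scheduler_thread_set_active and scheduler_thread_delete sub-commands, which create pinned active and idle threads, re-weight one of them, and delete a throwaway one; that is what produces the numbered thread ids (11, 12) in the log. A sketch of the same calls, assuming the scheduler app from the previous step is still up and that scheduler_plugin.py (from test/event/scheduler) is importable, for example via PYTHONPATH:

  SPDK_ROOT=/path/to/spdk                                # placeholder
  export PYTHONPATH="$SPDK_ROOT/test/event/scheduler"    # assumption: location of scheduler_plugin.py
  rpc() { "$SPDK_ROOT/scripts/rpc.py" --plugin scheduler_plugin "$@"; }

  # A busy thread pinned to core 0 (100% active) and an idle one pinned to core 1.
  id=$(rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100)
  rpc scheduler_thread_create -n idle_pinned -m 0x2 -a 0

  # Re-weight the first thread to 50% active, then create and delete a throwaway thread.
  rpc scheduler_thread_set_active "$id" 50
  tmp=$(rpc scheduler_thread_create -n deleted -a 100)
  rpc scheduler_thread_delete "$tmp"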
00:04:56.935 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:04:56.935 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:56.935 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:04:56.935 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:56.935 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:04:56.935 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:56.935 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:04:56.935 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:57.193 00:04:57.194 real 0m5.095s 00:04:57.194 user 0m10.540s 00:04:57.194 sys 0m0.405s 00:04:57.194 11:41:24 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:57.194 11:41:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:57.194 ************************************ 00:04:57.194 END TEST event_scheduler 00:04:57.194 ************************************ 00:04:57.194 11:41:24 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:57.194 11:41:24 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:57.194 11:41:24 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:57.194 11:41:24 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:57.194 11:41:24 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.194 ************************************ 00:04:57.194 START TEST app_repeat 00:04:57.194 ************************************ 00:04:57.194 11:41:24 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:04:57.194 11:41:24 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.194 11:41:24 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.194 11:41:24 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:57.194 11:41:24 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.194 11:41:24 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:57.194 11:41:24 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:57.194 11:41:24 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:57.194 11:41:24 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:57.194 11:41:24 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3617921 00:04:57.194 11:41:24 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.194 11:41:24 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3617921' 00:04:57.194 Process app_repeat pid: 3617921 00:04:57.194 11:41:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:57.194 11:41:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:57.194 spdk_app_start Round 0 00:04:57.194 11:41:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3617921 /var/tmp/spdk-nbd.sock 00:04:57.194 11:41:24 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3617921 ']' 00:04:57.194 11:41:24 event.app_repeat -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:57.194 11:41:24 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:57.194 11:41:24 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:57.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:57.194 11:41:24 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:57.194 11:41:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:57.194 [2024-05-14 11:41:24.163152] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:04:57.194 [2024-05-14 11:41:24.163213] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3617921 ] 00:04:57.194 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.194 [2024-05-14 11:41:24.232711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.452 [2024-05-14 11:41:24.305482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.452 [2024-05-14 11:41:24.305484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.019 11:41:24 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:58.019 11:41:24 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:58.019 11:41:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.277 Malloc0 00:04:58.277 11:41:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.277 Malloc1 00:04:58.277 11:41:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.277 11:41:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.277 11:41:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.277 11:41:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:58.277 11:41:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.277 11:41:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:58.277 11:41:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.277 11:41:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.277 11:41:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.277 11:41:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:58.277 11:41:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.277 11:41:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:58.277 11:41:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:58.277 11:41:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:58.277 11:41:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.277 11:41:25 event.app_repeat -- bdev/nbd_common.sh@15 
-- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:58.535 /dev/nbd0 00:04:58.535 11:41:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:58.535 11:41:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:58.535 11:41:25 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:58.535 11:41:25 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:58.535 11:41:25 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:58.535 11:41:25 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:58.535 11:41:25 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:58.535 11:41:25 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:58.535 11:41:25 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:58.535 11:41:25 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:58.535 11:41:25 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.535 1+0 records in 00:04:58.535 1+0 records out 00:04:58.535 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248797 s, 16.5 MB/s 00:04:58.535 11:41:25 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:04:58.535 11:41:25 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:58.535 11:41:25 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:04:58.535 11:41:25 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:58.535 11:41:25 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:58.535 11:41:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.535 11:41:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.535 11:41:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:58.793 /dev/nbd1 00:04:58.793 11:41:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:58.793 11:41:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:58.793 11:41:25 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:58.793 11:41:25 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:58.793 11:41:25 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:58.793 11:41:25 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:58.793 11:41:25 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:58.793 11:41:25 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:58.793 11:41:25 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:58.793 11:41:25 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:58.793 11:41:25 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.793 1+0 records in 00:04:58.793 1+0 records out 00:04:58.793 4096 bytes (4.1 kB, 4.0 KiB) 
copied, 0.000238303 s, 17.2 MB/s 00:04:58.793 11:41:25 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:04:58.793 11:41:25 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:58.793 11:41:25 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:04:58.793 11:41:25 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:58.793 11:41:25 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:58.793 11:41:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.793 11:41:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.793 11:41:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.793 11:41:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.793 11:41:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.051 11:41:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:59.051 { 00:04:59.051 "nbd_device": "/dev/nbd0", 00:04:59.051 "bdev_name": "Malloc0" 00:04:59.051 }, 00:04:59.051 { 00:04:59.051 "nbd_device": "/dev/nbd1", 00:04:59.051 "bdev_name": "Malloc1" 00:04:59.051 } 00:04:59.051 ]' 00:04:59.051 11:41:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:59.051 { 00:04:59.051 "nbd_device": "/dev/nbd0", 00:04:59.051 "bdev_name": "Malloc0" 00:04:59.051 }, 00:04:59.051 { 00:04:59.051 "nbd_device": "/dev/nbd1", 00:04:59.051 "bdev_name": "Malloc1" 00:04:59.051 } 00:04:59.051 ]' 00:04:59.051 11:41:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.051 11:41:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:59.051 /dev/nbd1' 00:04:59.051 11:41:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:59.051 /dev/nbd1' 00:04:59.051 11:41:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.051 11:41:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:59.051 11:41:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:59.051 11:41:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:59.051 11:41:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:59.052 11:41:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:59.052 11:41:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.052 11:41:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.052 11:41:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:59.052 11:41:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.052 11:41:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:59.052 11:41:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:59.052 256+0 records in 00:04:59.052 256+0 records out 00:04:59.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111006 s, 94.5 MB/s 00:04:59.052 11:41:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i 
in "${nbd_list[@]}" 00:04:59.052 11:41:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:59.052 256+0 records in 00:04:59.052 256+0 records out 00:04:59.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199103 s, 52.7 MB/s 00:04:59.052 11:41:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.052 11:41:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:59.052 256+0 records in 00:04:59.052 256+0 records out 00:04:59.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216076 s, 48.5 MB/s 00:04:59.052 11:41:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:59.052 11:41:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.052 11:41:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.052 11:41:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:59.052 11:41:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.052 11:41:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:59.052 11:41:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:59.052 11:41:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.052 11:41:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:59.052 11:41:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.052 11:41:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:59.052 11:41:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.052 11:41:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:59.052 11:41:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.052 11:41:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.052 11:41:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:59.052 11:41:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:59.052 11:41:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.052 11:41:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:59.310 11:41:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:59.310 11:41:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:59.310 11:41:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:59.310 11:41:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.310 11:41:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.310 11:41:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:59.310 11:41:26 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:59.310 11:41:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.310 11:41:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.310 11:41:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:59.568 11:41:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:59.568 11:41:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:59.568 11:41:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:59.568 11:41:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.568 11:41:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.568 11:41:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:59.568 11:41:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:59.568 11:41:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.568 11:41:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.568 11:41:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.568 11:41:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.568 11:41:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:59.568 11:41:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:59.568 11:41:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.568 11:41:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:59.568 11:41:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:59.568 11:41:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.568 11:41:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:59.568 11:41:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:59.568 11:41:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:59.568 11:41:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:59.568 11:41:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:59.568 11:41:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:59.568 11:41:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:59.827 11:41:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:00.086 [2024-05-14 11:41:27.018405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:00.086 [2024-05-14 11:41:27.083811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.086 [2024-05-14 11:41:27.083813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.086 [2024-05-14 11:41:27.124847] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:00.086 [2024-05-14 11:41:27.124894] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
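Each app_repeat round in the trace follows the same data-path check. A condensed sketch of Round 0, using only commands that appear verbatim in the log above (the helper functions wrap retries and error handling that are omitted here):

    sock=/var/tmp/spdk-nbd.sock
    rpc="./spdk/scripts/rpc.py -s $sock"
    $rpc bdev_malloc_create 64 4096                                  # -> Malloc0 (64 MB, 4096-byte blocks)
    $rpc bdev_malloc_create 64 4096                                  # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256              # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of=$nbd bs=4096 count=256 oflag=direct     # write it through the NBD device
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M nbdrandtest $nbd                                # read back and compare
    done
    rm nbdrandtest
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    $rpc spdk_kill_instance SIGTERM                                  # tear the app down before the next round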
00:05:03.367 11:41:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:03.367 11:41:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:03.367 spdk_app_start Round 1 00:05:03.367 11:41:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3617921 /var/tmp/spdk-nbd.sock 00:05:03.367 11:41:29 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3617921 ']' 00:05:03.367 11:41:29 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:03.367 11:41:29 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:03.367 11:41:29 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:03.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:03.367 11:41:29 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:03.367 11:41:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:03.367 11:41:30 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:03.367 11:41:30 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:03.367 11:41:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.367 Malloc0 00:05:03.367 11:41:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.367 Malloc1 00:05:03.367 11:41:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.367 11:41:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.367 11:41:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.367 11:41:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:03.367 11:41:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.367 11:41:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:03.367 11:41:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.367 11:41:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.367 11:41:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.367 11:41:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:03.367 11:41:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.367 11:41:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:03.367 11:41:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:03.367 11:41:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:03.367 11:41:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.367 11:41:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:03.624 /dev/nbd0 00:05:03.624 11:41:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:03.624 11:41:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:05:03.624 11:41:30 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:03.624 11:41:30 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:03.624 11:41:30 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:03.624 11:41:30 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:03.624 11:41:30 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:03.624 11:41:30 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:03.624 11:41:30 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:03.624 11:41:30 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:03.624 11:41:30 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.624 1+0 records in 00:05:03.624 1+0 records out 00:05:03.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245101 s, 16.7 MB/s 00:05:03.625 11:41:30 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:03.625 11:41:30 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:03.625 11:41:30 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:03.625 11:41:30 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:03.625 11:41:30 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:03.625 11:41:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.625 11:41:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.625 11:41:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:03.883 /dev/nbd1 00:05:03.883 11:41:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:03.883 11:41:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:03.883 11:41:30 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:03.883 11:41:30 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:03.883 11:41:30 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:03.883 11:41:30 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:03.883 11:41:30 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:03.883 11:41:30 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:03.883 11:41:30 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:03.883 11:41:30 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:03.883 11:41:30 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.883 1+0 records in 00:05:03.883 1+0 records out 00:05:03.883 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254739 s, 16.1 MB/s 00:05:03.883 11:41:30 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:03.883 11:41:30 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:03.883 11:41:30 
event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:03.883 11:41:30 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:03.883 11:41:30 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:03.883 11:41:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.883 11:41:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.883 11:41:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.883 11:41:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.883 11:41:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.883 11:41:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:03.883 { 00:05:03.883 "nbd_device": "/dev/nbd0", 00:05:03.883 "bdev_name": "Malloc0" 00:05:03.883 }, 00:05:03.883 { 00:05:03.883 "nbd_device": "/dev/nbd1", 00:05:03.883 "bdev_name": "Malloc1" 00:05:03.883 } 00:05:03.883 ]' 00:05:03.883 11:41:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:03.883 { 00:05:03.883 "nbd_device": "/dev/nbd0", 00:05:03.883 "bdev_name": "Malloc0" 00:05:03.883 }, 00:05:03.883 { 00:05:03.883 "nbd_device": "/dev/nbd1", 00:05:03.883 "bdev_name": "Malloc1" 00:05:03.883 } 00:05:03.883 ]' 00:05:03.883 11:41:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.142 11:41:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:04.142 /dev/nbd1' 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:04.142 /dev/nbd1' 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:04.142 256+0 records in 00:05:04.142 256+0 records out 00:05:04.142 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114497 s, 91.6 MB/s 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:04.142 256+0 records in 00:05:04.142 256+0 records out 00:05:04.142 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0199591 s, 52.5 MB/s 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:04.142 256+0 records in 00:05:04.142 256+0 records out 00:05:04.142 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206681 s, 50.7 MB/s 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.142 11:41:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:04.400 11:41:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:04.400 11:41:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:04.400 11:41:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:04.400 11:41:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.400 11:41:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.400 11:41:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:04.400 11:41:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:04.400 11:41:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.400 11:41:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.400 11:41:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:04.400 11:41:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:04.400 11:41:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:04.400 11:41:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:04.400 11:41:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.400 11:41:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.400 11:41:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:04.400 11:41:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:04.400 11:41:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.400 11:41:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.400 11:41:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.400 11:41:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.659 11:41:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:04.659 11:41:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:04.659 11:41:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.659 11:41:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:04.659 11:41:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.659 11:41:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:04.659 11:41:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:04.659 11:41:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:04.659 11:41:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:04.659 11:41:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:04.659 11:41:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:04.659 11:41:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:04.659 11:41:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:04.918 11:41:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:05.176 [2024-05-14 11:41:32.059108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:05.176 [2024-05-14 11:41:32.125585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.176 [2024-05-14 11:41:32.125587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.176 [2024-05-14 11:41:32.167516] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:05.176 [2024-05-14 11:41:32.167564] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
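The grep/dd/stat sequences that repeat before and after every nbd_start_disk/nbd_stop_disk call in the rounds above are the framework's readiness checks. A hedged reconstruction follows; the retry ceiling of 20 and the file names come from the trace, while the sleep between retries and the exact loop structure are assumptions, since the devices were ready on the first attempt in every round of this run:

    waitfornbd() {                        # wait until /dev/$1 exists and answers I/O
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # a single direct 4 KiB read proves the device is actually serving data
        dd if=/dev/$nbd_name of=nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s nbdtest)
        rm -f nbdtest
        [ "$size" != 0 ]
    }

    waitfornbd_exit() {                   # wait until /dev/$1 has disappeared again
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
    }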
00:05:08.487 11:41:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:08.487 11:41:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:08.487 spdk_app_start Round 2 00:05:08.487 11:41:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3617921 /var/tmp/spdk-nbd.sock 00:05:08.487 11:41:34 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3617921 ']' 00:05:08.487 11:41:34 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:08.487 11:41:34 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:08.487 11:41:34 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:08.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:08.487 11:41:34 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:08.487 11:41:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:08.487 11:41:35 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:08.487 11:41:35 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:08.487 11:41:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.487 Malloc0 00:05:08.487 11:41:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.487 Malloc1 00:05:08.487 11:41:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.487 11:41:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.487 11:41:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.487 11:41:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:08.487 11:41:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.487 11:41:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:08.487 11:41:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.487 11:41:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.487 11:41:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.487 11:41:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:08.487 11:41:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.487 11:41:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:08.487 11:41:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:08.487 11:41:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:08.487 11:41:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.487 11:41:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:08.487 /dev/nbd0 00:05:08.745 11:41:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:08.745 11:41:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.745 1+0 records in 00:05:08.745 1+0 records out 00:05:08.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208294 s, 19.7 MB/s 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:08.745 11:41:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.745 11:41:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.745 11:41:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:08.745 /dev/nbd1 00:05:08.745 11:41:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:08.745 11:41:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.745 1+0 records in 00:05:08.745 1+0 records out 00:05:08.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390602 s, 10.5 MB/s 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:08.745 11:41:35 
event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:08.745 11:41:35 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:08.745 11:41:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.745 11:41:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.745 11:41:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:08.745 11:41:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.745 11:41:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.003 11:41:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:09.003 { 00:05:09.003 "nbd_device": "/dev/nbd0", 00:05:09.003 "bdev_name": "Malloc0" 00:05:09.003 }, 00:05:09.003 { 00:05:09.003 "nbd_device": "/dev/nbd1", 00:05:09.003 "bdev_name": "Malloc1" 00:05:09.003 } 00:05:09.003 ]' 00:05:09.003 11:41:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:09.003 { 00:05:09.003 "nbd_device": "/dev/nbd0", 00:05:09.003 "bdev_name": "Malloc0" 00:05:09.003 }, 00:05:09.003 { 00:05:09.003 "nbd_device": "/dev/nbd1", 00:05:09.003 "bdev_name": "Malloc1" 00:05:09.003 } 00:05:09.003 ]' 00:05:09.003 11:41:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.003 11:41:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:09.003 /dev/nbd1' 00:05:09.003 11:41:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:09.003 /dev/nbd1' 00:05:09.003 11:41:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.003 11:41:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:09.003 11:41:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:09.003 11:41:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:09.003 11:41:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:09.003 11:41:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:09.003 11:41:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.003 11:41:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.003 11:41:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:09.003 11:41:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.003 11:41:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:09.003 11:41:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:09.003 256+0 records in 00:05:09.003 256+0 records out 00:05:09.003 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108245 s, 96.9 MB/s 00:05:09.003 11:41:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.003 11:41:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:09.003 256+0 records in 00:05:09.003 256+0 records out 00:05:09.004 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0198454 s, 52.8 MB/s 00:05:09.004 11:41:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.004 11:41:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:09.261 256+0 records in 00:05:09.261 256+0 records out 00:05:09.261 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213879 s, 49.0 MB/s 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.261 11:41:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:09.519 11:41:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:09.519 11:41:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:09.519 11:41:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:09.519 11:41:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.519 11:41:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.519 11:41:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:09.519 11:41:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:09.519 11:41:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.519 11:41:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.519 11:41:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.519 11:41:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.778 11:41:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:09.778 11:41:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:09.778 11:41:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.778 11:41:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:09.778 11:41:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:09.778 11:41:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.778 11:41:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:09.778 11:41:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:09.778 11:41:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:09.778 11:41:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:09.778 11:41:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:09.778 11:41:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:09.778 11:41:36 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:10.036 11:41:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:10.036 [2024-05-14 11:41:37.125632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.294 [2024-05-14 11:41:37.195821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.294 [2024-05-14 11:41:37.195823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.294 [2024-05-14 11:41:37.236127] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:10.294 [2024-05-14 11:41:37.236172] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
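Before and after each round the trace counts the exported devices through the JSON returned by nbd_get_disks. The check reduces to the following; the '|| true' mirrors the bare 'true' visible in the trace, needed because grep -c exits non-zero once the devices are stopped and nothing matches:

    sock=/var/tmp/spdk-nbd.sock
    disks_json=$(./spdk/scripts/rpc.py -s $sock nbd_get_disks)       # [] once both disks are stopped
    disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$disks_name" | grep -c /dev/nbd || true)           # 2 while running, 0 afterwards
    [ "$count" -eq 0 ] && echo "all NBD devices detached"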
00:05:13.571 11:41:39 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3617921 /var/tmp/spdk-nbd.sock 00:05:13.571 11:41:39 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3617921 ']' 00:05:13.571 11:41:39 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.571 11:41:39 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:13.571 11:41:39 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:13.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:13.571 11:41:39 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:13.571 11:41:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:13.571 11:41:40 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:13.571 11:41:40 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:13.571 11:41:40 event.app_repeat -- event/event.sh@39 -- # killprocess 3617921 00:05:13.571 11:41:40 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 3617921 ']' 00:05:13.571 11:41:40 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 3617921 00:05:13.571 11:41:40 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:05:13.571 11:41:40 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:13.571 11:41:40 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3617921 00:05:13.571 11:41:40 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:13.571 11:41:40 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:13.571 11:41:40 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3617921' 00:05:13.571 killing process with pid 3617921 00:05:13.571 11:41:40 event.app_repeat -- common/autotest_common.sh@965 -- # kill 3617921 00:05:13.571 11:41:40 event.app_repeat -- common/autotest_common.sh@970 -- # wait 3617921 00:05:13.571 spdk_app_start is called in Round 0. 00:05:13.571 Shutdown signal received, stop current app iteration 00:05:13.571 Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 reinitialization... 00:05:13.571 spdk_app_start is called in Round 1. 00:05:13.571 Shutdown signal received, stop current app iteration 00:05:13.571 Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 reinitialization... 00:05:13.571 spdk_app_start is called in Round 2. 00:05:13.571 Shutdown signal received, stop current app iteration 00:05:13.571 Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 reinitialization... 00:05:13.571 spdk_app_start is called in Round 3. 
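The killprocess calls that close out both the scheduler and app_repeat tests follow the pattern visible in the trace: confirm the pid is still alive, check what it is running as, then SIGTERM it and reap it. A linear sketch of the path actually taken in this run (the special handling for a process named "sudo" is branched around in the trace and is not reconstructed here):

    pid=3617921
    [ -n "$pid" ] || exit 1
    kill -0 "$pid" || exit 0                              # already gone, nothing to do
    [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
    if [ "$process_name" != sudo ]; then                  # a sudo wrapper would be handled differently
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid"                                           # reap it so the next test starts clean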
00:05:13.571 Shutdown signal received, stop current app iteration 00:05:13.571 11:41:40 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:13.571 11:41:40 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:13.571 00:05:13.571 real 0m16.196s 00:05:13.571 user 0m34.194s 00:05:13.571 sys 0m3.171s 00:05:13.571 11:41:40 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:13.571 11:41:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:13.571 ************************************ 00:05:13.571 END TEST app_repeat 00:05:13.571 ************************************ 00:05:13.571 11:41:40 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:13.571 11:41:40 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:13.571 11:41:40 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:13.571 11:41:40 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.571 11:41:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.571 ************************************ 00:05:13.571 START TEST cpu_locks 00:05:13.571 ************************************ 00:05:13.572 11:41:40 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:13.572 * Looking for test storage... 00:05:13.572 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:13.572 11:41:40 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:13.572 11:41:40 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:13.572 11:41:40 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:13.572 11:41:40 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:13.572 11:41:40 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:13.572 11:41:40 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.572 11:41:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.572 ************************************ 00:05:13.572 START TEST default_locks 00:05:13.572 ************************************ 00:05:13.572 11:41:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:05:13.572 11:41:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3621068 00:05:13.572 11:41:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:13.572 11:41:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3621068 00:05:13.572 11:41:40 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3621068 ']' 00:05:13.572 11:41:40 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.572 11:41:40 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:13.572 11:41:40 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
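Only the banner of waitforlisten shows up in this log; the polling itself is hidden behind xtrace_disable. A plausible stand-in, assuming it loops until the target's RPC socket answers rpc_get_methods (the real helper in autotest_common.sh may differ in detail):

  waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    local rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    for ((i = 0; i < 100; i++)); do
      kill -0 "$pid" || return 1                     # the target died before listening
      "$rpc" -s "$sock" rpc_get_methods &>/dev/null && return 0
      sleep 0.1
    done
    return 1
  }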
00:05:13.572 11:41:40 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:13.572 11:41:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.572 [2024-05-14 11:41:40.600236] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:05:13.572 [2024-05-14 11:41:40.600314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3621068 ] 00:05:13.572 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.830 [2024-05-14 11:41:40.666117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.830 [2024-05-14 11:41:40.738949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.395 11:41:41 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:14.395 11:41:41 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:05:14.395 11:41:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3621068 00:05:14.395 11:41:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3621068 00:05:14.395 11:41:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:14.959 lslocks: write error 00:05:14.959 11:41:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3621068 00:05:14.959 11:41:41 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 3621068 ']' 00:05:14.959 11:41:41 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 3621068 00:05:14.959 11:41:41 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:05:14.959 11:41:41 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:14.959 11:41:41 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3621068 00:05:14.959 11:41:41 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:14.959 11:41:41 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:14.959 11:41:41 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3621068' 00:05:14.959 killing process with pid 3621068 00:05:14.959 11:41:41 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 3621068 00:05:14.959 11:41:41 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 3621068 00:05:15.216 11:41:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3621068 00:05:15.216 11:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:15.216 11:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3621068 00:05:15.216 11:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:15.216 11:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:15.216 11:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:15.216 11:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:15.216 11:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- 
# waitforlisten 3621068 00:05:15.216 11:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3621068 ']' 00:05:15.216 11:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.216 11:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:15.216 11:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.216 11:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:15.216 11:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.216 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3621068) - No such process 00:05:15.216 ERROR: process (pid: 3621068) is no longer running 00:05:15.216 11:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:15.216 11:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:05:15.216 11:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:15.216 11:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:15.216 11:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:15.216 11:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:15.216 11:41:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:15.216 11:41:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:15.216 11:41:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:15.216 11:41:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:15.216 00:05:15.216 real 0m1.642s 00:05:15.216 user 0m1.719s 00:05:15.216 sys 0m0.580s 00:05:15.216 11:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.216 11:41:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.216 ************************************ 00:05:15.216 END TEST default_locks 00:05:15.216 ************************************ 00:05:15.216 11:41:42 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:15.216 11:41:42 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:15.216 11:41:42 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:15.216 11:41:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.216 ************************************ 00:05:15.216 START TEST default_locks_via_rpc 00:05:15.216 ************************************ 00:05:15.216 11:41:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:05:15.216 11:41:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3621378 00:05:15.216 11:41:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3621378 00:05:15.216 11:41:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.216 11:41:42 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3621378 ']' 00:05:15.216 11:41:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.216 11:41:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:15.216 11:41:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.474 11:41:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:15.474 11:41:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.474 [2024-05-14 11:41:42.329279] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:05:15.474 [2024-05-14 11:41:42.329362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3621378 ] 00:05:15.474 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.474 [2024-05-14 11:41:42.397928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.474 [2024-05-14 11:41:42.465477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.408 11:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:16.408 11:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:16.408 11:41:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:16.408 11:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.408 11:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.408 11:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.408 11:41:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:16.408 11:41:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:16.408 11:41:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:16.408 11:41:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:16.408 11:41:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:16.408 11:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.408 11:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.408 11:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.408 11:41:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3621378 00:05:16.408 11:41:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3621378 00:05:16.408 11:41:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:16.408 11:41:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3621378 00:05:16.408 11:41:43 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 3621378 ']' 00:05:16.408 11:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 3621378 00:05:16.408 11:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:05:16.408 11:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:16.408 11:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3621378 00:05:16.666 11:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:16.666 11:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:16.666 11:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3621378' 00:05:16.666 killing process with pid 3621378 00:05:16.666 11:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 3621378 00:05:16.666 11:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 3621378 00:05:16.924 00:05:16.924 real 0m1.528s 00:05:16.924 user 0m1.583s 00:05:16.924 sys 0m0.520s 00:05:16.924 11:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:16.924 11:41:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.924 ************************************ 00:05:16.924 END TEST default_locks_via_rpc 00:05:16.924 ************************************ 00:05:16.924 11:41:43 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:16.924 11:41:43 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:16.924 11:41:43 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.924 11:41:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.924 ************************************ 00:05:16.924 START TEST non_locking_app_on_locked_coremask 00:05:16.924 ************************************ 00:05:16.924 11:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:05:16.924 11:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3621674 00:05:16.924 11:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3621674 /var/tmp/spdk.sock 00:05:16.924 11:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3621674 ']' 00:05:16.924 11:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.924 11:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:16.924 11:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
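The two default_locks runs above reduce to two checks: lslocks must see an spdk_cpu_lock file held by the target's pid, and that lock can be dropped and re-taken at runtime through the framework_disable/enable_cpumask_locks RPCs. A condensed sketch with the binaries and core mask used in this trace ("lslocks: write error" in the log is only grep -q closing the pipe early):

  tgt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
  rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py

  "$tgt" -m 0x1 & pid=$!                             # core 0 only; locks are claimed by default
  # (wait for the default /var/tmp/spdk.sock to come up before poking it)
  lslocks -p "$pid" | grep -q spdk_cpu_lock          # locks_exist: a lock file is held
  "$rpc" framework_disable_cpumask_locks             # no_locks: /var/tmp/spdk_cpu_lock_* released
  "$rpc" framework_enable_cpumask_locks              # locks_exist again
  kill "$pid"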
00:05:16.924 11:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:16.924 11:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.924 11:41:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.924 [2024-05-14 11:41:43.938224] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:05:16.924 [2024-05-14 11:41:43.938305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3621674 ] 00:05:16.924 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.924 [2024-05-14 11:41:44.006321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.182 [2024-05-14 11:41:44.084706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.748 11:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:17.748 11:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:17.748 11:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3621847 00:05:17.748 11:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3621847 /var/tmp/spdk2.sock 00:05:17.748 11:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3621847 ']' 00:05:17.748 11:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:17.748 11:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:17.748 11:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:17.748 11:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:17.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:17.748 11:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:17.748 11:41:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.748 [2024-05-14 11:41:44.754671] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:05:17.748 [2024-05-14 11:41:44.754735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3621847 ] 00:05:17.748 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.006 [2024-05-14 11:41:44.847920] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:18.006 [2024-05-14 11:41:44.847948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.006 [2024-05-14 11:41:44.999808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.572 11:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:18.572 11:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:18.572 11:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3621674 00:05:18.572 11:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3621674 00:05:18.572 11:41:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:19.503 lslocks: write error 00:05:19.503 11:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3621674 00:05:19.503 11:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3621674 ']' 00:05:19.503 11:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3621674 00:05:19.503 11:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:19.503 11:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:19.503 11:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3621674 00:05:19.503 11:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:19.503 11:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:19.503 11:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3621674' 00:05:19.503 killing process with pid 3621674 00:05:19.503 11:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3621674 00:05:19.503 11:41:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3621674 00:05:20.068 11:41:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3621847 00:05:20.068 11:41:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3621847 ']' 00:05:20.068 11:41:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3621847 00:05:20.068 11:41:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:20.068 11:41:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:20.068 11:41:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3621847 00:05:20.325 11:41:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:20.325 11:41:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:20.325 11:41:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3621847' 00:05:20.325 
killing process with pid 3621847 00:05:20.325 11:41:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3621847 00:05:20.325 11:41:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3621847 00:05:20.582 00:05:20.582 real 0m3.579s 00:05:20.582 user 0m3.799s 00:05:20.582 sys 0m1.131s 00:05:20.582 11:41:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:20.582 11:41:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.582 ************************************ 00:05:20.582 END TEST non_locking_app_on_locked_coremask 00:05:20.582 ************************************ 00:05:20.582 11:41:47 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:20.582 11:41:47 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:20.582 11:41:47 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:20.582 11:41:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.582 ************************************ 00:05:20.582 START TEST locking_app_on_unlocked_coremask 00:05:20.582 ************************************ 00:05:20.582 11:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:05:20.582 11:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3622278 00:05:20.582 11:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3622278 /var/tmp/spdk.sock 00:05:20.582 11:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3622278 ']' 00:05:20.582 11:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.582 11:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:20.582 11:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.582 11:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:20.582 11:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.582 11:41:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:20.582 [2024-05-14 11:41:47.604447] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:05:20.582 [2024-05-14 11:41:47.604530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3622278 ] 00:05:20.582 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.840 [2024-05-14 11:41:47.675099] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
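non_locking_app_on_locked_coremask above shows the supported way to run two targets on the same core: the second instance opts out of lock claiming and listens on its own RPC socket, while the lock stays with the first pid. A sketch with the flags from the trace:

  tgt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
  "$tgt" -m 0x1 & pid1=$!                                          # claims core 0
  "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & pid2=$!
  lslocks -p "$pid1" | grep -q spdk_cpu_lock                       # the lock still belongs to pid1
  kill "$pid2" "$pid1"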
00:05:20.840 [2024-05-14 11:41:47.675123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.840 [2024-05-14 11:41:47.752986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.404 11:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:21.404 11:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:21.404 11:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3622517 00:05:21.404 11:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3622517 /var/tmp/spdk2.sock 00:05:21.404 11:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3622517 ']' 00:05:21.404 11:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:21.404 11:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:21.404 11:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:21.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:21.404 11:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:21.404 11:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.404 11:41:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:21.404 [2024-05-14 11:41:48.446098] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:05:21.404 [2024-05-14 11:41:48.446183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3622517 ] 00:05:21.404 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.661 [2024-05-14 11:41:48.536809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.661 [2024-05-14 11:41:48.679905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.228 11:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:22.228 11:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:22.228 11:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3622517 00:05:22.228 11:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3622517 00:05:22.228 11:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:22.794 lslocks: write error 00:05:22.794 11:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3622278 00:05:22.794 11:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3622278 ']' 00:05:22.794 11:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3622278 00:05:22.794 11:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:22.794 11:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:22.794 11:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3622278 00:05:22.794 11:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:22.794 11:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:22.794 11:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3622278' 00:05:22.794 killing process with pid 3622278 00:05:22.794 11:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3622278 00:05:22.794 11:41:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3622278 00:05:23.361 11:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3622517 00:05:23.361 11:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3622517 ']' 00:05:23.361 11:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3622517 00:05:23.361 11:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:23.361 11:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:23.361 11:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3622517 00:05:23.361 11:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:05:23.361 11:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:23.361 11:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3622517' 00:05:23.361 killing process with pid 3622517 00:05:23.361 11:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3622517 00:05:23.361 11:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3622517 00:05:23.928 00:05:23.928 real 0m3.140s 00:05:23.928 user 0m3.343s 00:05:23.928 sys 0m0.974s 00:05:23.928 11:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.928 11:41:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.928 ************************************ 00:05:23.928 END TEST locking_app_on_unlocked_coremask 00:05:23.928 ************************************ 00:05:23.928 11:41:50 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:23.928 11:41:50 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.928 11:41:50 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.928 11:41:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.928 ************************************ 00:05:23.928 START TEST locking_app_on_locked_coremask 00:05:23.928 ************************************ 00:05:23.928 11:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:05:23.928 11:41:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3622939 00:05:23.928 11:41:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3622939 /var/tmp/spdk.sock 00:05:23.928 11:41:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.928 11:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3622939 ']' 00:05:23.928 11:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.928 11:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:23.928 11:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.928 11:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:23.928 11:41:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.928 [2024-05-14 11:41:50.830256] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:05:23.928 [2024-05-14 11:41:50.830345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3622939 ] 00:05:23.928 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.928 [2024-05-14 11:41:50.899334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.928 [2024-05-14 11:41:50.971637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.862 11:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:24.862 11:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:24.862 11:41:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:24.862 11:41:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3623090 00:05:24.862 11:41:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3623090 /var/tmp/spdk2.sock 00:05:24.862 11:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:24.862 11:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3623090 /var/tmp/spdk2.sock 00:05:24.862 11:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:24.862 11:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.862 11:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:24.862 11:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.862 11:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3623090 /var/tmp/spdk2.sock 00:05:24.862 11:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3623090 ']' 00:05:24.862 11:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.862 11:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:24.862 11:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:24.862 11:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:24.862 11:41:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.862 [2024-05-14 11:41:51.661159] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:05:24.862 [2024-05-14 11:41:51.661225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3623090 ] 00:05:24.862 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.862 [2024-05-14 11:41:51.749283] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3622939 has claimed it. 00:05:24.862 [2024-05-14 11:41:51.749324] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:25.442 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3623090) - No such process 00:05:25.442 ERROR: process (pid: 3623090) is no longer running 00:05:25.442 11:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:25.442 11:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:05:25.442 11:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:25.442 11:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:25.442 11:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:25.442 11:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:25.442 11:41:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3622939 00:05:25.442 11:41:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3622939 00:05:25.442 11:41:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:26.008 lslocks: write error 00:05:26.008 11:41:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3622939 00:05:26.008 11:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3622939 ']' 00:05:26.008 11:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3622939 00:05:26.008 11:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:26.008 11:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:26.008 11:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3622939 00:05:26.008 11:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:26.008 11:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:26.008 11:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3622939' 00:05:26.008 killing process with pid 3622939 00:05:26.008 11:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3622939 00:05:26.008 11:41:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3622939 00:05:26.266 00:05:26.266 real 0m2.487s 00:05:26.266 user 0m2.690s 00:05:26.266 sys 0m0.765s 00:05:26.266 11:41:53 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.266 11:41:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.266 ************************************ 00:05:26.266 END TEST locking_app_on_locked_coremask 00:05:26.266 ************************************ 00:05:26.266 11:41:53 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:26.266 11:41:53 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:26.266 11:41:53 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.266 11:41:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.526 ************************************ 00:05:26.526 START TEST locking_overlapped_coremask 00:05:26.526 ************************************ 00:05:26.526 11:41:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:05:26.526 11:41:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3623391 00:05:26.526 11:41:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3623391 /var/tmp/spdk.sock 00:05:26.526 11:41:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:26.526 11:41:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3623391 ']' 00:05:26.526 11:41:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.526 11:41:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:26.526 11:41:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.526 11:41:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:26.526 11:41:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.526 [2024-05-14 11:41:53.408032] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
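locking_app_on_locked_coremask is the negative case: a second target that keeps lock claiming enabled and asks for the already-owned core 0 has to exit before it ever listens, which is why the NOT waitforlisten above reports the pid as already gone. A hedged sketch of asserting that failure:

  tgt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
  "$tgt" -m 0x1 & pid1=$!                            # first instance claims core 0
  # second instance: same core, locking still enabled, separate RPC socket -> must fail
  if "$tgt" -m 0x1 -r /var/tmp/spdk2.sock; then
    echo "second target unexpectedly acquired core 0" >&2; exit 1
  fi
  # its stderr ends with: Cannot create lock on core 0, probably process <pid1> has claimed it.
  kill "$pid1"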
00:05:26.526 [2024-05-14 11:41:53.408100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3623391 ] 00:05:26.526 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.526 [2024-05-14 11:41:53.478241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.526 [2024-05-14 11:41:53.558037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.526 [2024-05-14 11:41:53.558056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.526 [2024-05-14 11:41:53.558059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.157 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:27.157 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:27.157 11:41:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3623658 00:05:27.157 11:41:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3623658 /var/tmp/spdk2.sock 00:05:27.157 11:41:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:27.157 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:27.157 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3623658 /var/tmp/spdk2.sock 00:05:27.157 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:27.415 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.415 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:27.415 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.415 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3623658 /var/tmp/spdk2.sock 00:05:27.415 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3623658 ']' 00:05:27.415 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.416 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:27.416 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:27.416 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:27.416 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.416 [2024-05-14 11:41:54.268763] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:05:27.416 [2024-05-14 11:41:54.268852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3623658 ] 00:05:27.416 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.416 [2024-05-14 11:41:54.361708] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3623391 has claimed it. 00:05:27.416 [2024-05-14 11:41:54.361745] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:27.982 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3623658) - No such process 00:05:27.982 ERROR: process (pid: 3623658) is no longer running 00:05:27.982 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:27.982 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:05:27.982 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:27.982 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:27.982 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:27.982 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:27.982 11:41:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:27.982 11:41:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:27.982 11:41:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:27.982 11:41:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:27.982 11:41:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3623391 00:05:27.982 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 3623391 ']' 00:05:27.982 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 3623391 00:05:27.982 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:05:27.982 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:27.982 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3623391 00:05:27.982 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:27.982 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:27.982 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3623391' 00:05:27.982 killing process with pid 3623391 00:05:27.982 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
3623391 00:05:27.982 11:41:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 3623391 00:05:28.240 00:05:28.240 real 0m1.895s 00:05:28.240 user 0m5.324s 00:05:28.240 sys 0m0.462s 00:05:28.240 11:41:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.240 11:41:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.240 ************************************ 00:05:28.240 END TEST locking_overlapped_coremask 00:05:28.240 ************************************ 00:05:28.240 11:41:55 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:28.240 11:41:55 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:28.240 11:41:55 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.240 11:41:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.499 ************************************ 00:05:28.499 START TEST locking_overlapped_coremask_via_rpc 00:05:28.499 ************************************ 00:05:28.499 11:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:05:28.499 11:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3623798 00:05:28.499 11:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3623798 /var/tmp/spdk.sock 00:05:28.499 11:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:28.499 11:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3623798 ']' 00:05:28.499 11:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.499 11:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:28.499 11:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.499 11:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:28.499 11:41:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.499 [2024-05-14 11:41:55.392456] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:05:28.499 [2024-05-14 11:41:55.392539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3623798 ] 00:05:28.499 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.499 [2024-05-14 11:41:55.462715] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
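check_remaining_locks in the overlapped run compares the lock files actually present under /var/tmp against the exact set a 0x7 core mask (cores 0-2) should leave behind. The comparison as exercised above, in minimal form:

  locks=(/var/tmp/spdk_cpu_lock_*)                   # what exists after the test
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) # what a -m 0x7 target should hold
  [[ ${locks[*]} == "${locks_expected[*]}" ]]        # any extra or missing lock file fails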
00:05:28.499 [2024-05-14 11:41:55.462739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:28.499 [2024-05-14 11:41:55.542841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.499 [2024-05-14 11:41:55.542936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.499 [2024-05-14 11:41:55.542938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.433 11:41:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:29.433 11:41:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:29.433 11:41:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3623971 00:05:29.433 11:41:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3623971 /var/tmp/spdk2.sock 00:05:29.433 11:41:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:29.433 11:41:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3623971 ']' 00:05:29.433 11:41:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.433 11:41:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:29.433 11:41:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.433 11:41:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:29.433 11:41:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.433 [2024-05-14 11:41:56.244895] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:05:29.433 [2024-05-14 11:41:56.244969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3623971 ] 00:05:29.433 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.433 [2024-05-14 11:41:56.338641] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
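locking_overlapped_coremask_via_rpc starts both targets with --disable-cpumask-locks so that the 0x7 and 0x1c masks can overlap on core 2, then turns locking back on over RPC; the lines that follow show the first enable succeeding and the second being rejected with JSON-RPC error -32603 because core 2 is already claimed. A sketch of that toggle, assuming rpc.py against the two sockets used here:

  tgt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
  rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
  "$tgt" -m 0x7  --disable-cpumask-locks & pid1=$!                          # cores 0-2
  "$tgt" -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock & pid2=$!   # cores 2-4
  "$rpc" framework_enable_cpumask_locks                   # first target claims cores 0-2
  if "$rpc" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
    echo "second target should not have claimed core 2" >&2; exit 1
  fi
  kill "$pid2" "$pid1"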
00:05:29.433 [2024-05-14 11:41:56.338669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:29.433 [2024-05-14 11:41:56.490793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.433 [2024-05-14 11:41:56.494427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.433 [2024-05-14 11:41:56.494428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:29.999 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:29.999 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:29.999 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:29.999 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.999 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.999 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.999 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:29.999 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:29.999 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:29.999 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:29.999 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.999 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:29.999 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.999 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:29.999 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.999 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.257 [2024-05-14 11:41:57.094452] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3623798 has claimed it. 
00:05:30.257 request: 00:05:30.257 { 00:05:30.257 "method": "framework_enable_cpumask_locks", 00:05:30.257 "req_id": 1 00:05:30.257 } 00:05:30.257 Got JSON-RPC error response 00:05:30.257 response: 00:05:30.257 { 00:05:30.257 "code": -32603, 00:05:30.257 "message": "Failed to claim CPU core: 2" 00:05:30.257 } 00:05:30.257 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:30.257 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:30.257 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:30.257 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:30.257 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:30.257 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3623798 /var/tmp/spdk.sock 00:05:30.257 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3623798 ']' 00:05:30.257 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.257 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:30.257 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.257 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:30.257 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.257 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:30.257 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:30.257 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3623971 /var/tmp/spdk2.sock 00:05:30.257 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3623971 ']' 00:05:30.257 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.257 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:30.257 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
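The JSON above is the client-side view of the second framework_enable_cpumask_locks call failing: the first target already enabled locks and holds the lock file for core 2. A rough by-hand equivalent of the two calls the test issues through rpc_cmd, assuming the stock scripts/rpc.py client from the SPDK tree and the socket paths used in this run:

# sketch only: same RPCs as above, issued manually
./scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # first target: succeeds, claims cores 0-2
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target: fails, core 2 already claimed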
00:05:30.257 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:30.257 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.516 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:30.516 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:30.516 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:30.516 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:30.516 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:30.516 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:30.516 00:05:30.516 real 0m2.113s 00:05:30.516 user 0m0.825s 00:05:30.516 sys 0m0.214s 00:05:30.516 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.516 11:41:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.516 ************************************ 00:05:30.516 END TEST locking_overlapped_coremask_via_rpc 00:05:30.516 ************************************ 00:05:30.516 11:41:57 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:30.516 11:41:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3623798 ]] 00:05:30.516 11:41:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3623798 00:05:30.516 11:41:57 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3623798 ']' 00:05:30.516 11:41:57 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3623798 00:05:30.516 11:41:57 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:05:30.516 11:41:57 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:30.516 11:41:57 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3623798 00:05:30.516 11:41:57 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:30.516 11:41:57 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:30.517 11:41:57 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3623798' 00:05:30.517 killing process with pid 3623798 00:05:30.517 11:41:57 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3623798 00:05:30.517 11:41:57 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3623798 00:05:31.085 11:41:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3623971 ]] 00:05:31.085 11:41:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3623971 00:05:31.085 11:41:57 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3623971 ']' 00:05:31.085 11:41:57 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3623971 00:05:31.085 11:41:57 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:05:31.085 11:41:57 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:05:31.085 11:41:57 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3623971 00:05:31.086 11:41:57 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:31.086 11:41:57 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:31.086 11:41:57 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3623971' 00:05:31.086 killing process with pid 3623971 00:05:31.086 11:41:57 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3623971 00:05:31.086 11:41:57 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3623971 00:05:31.344 11:41:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:31.344 11:41:58 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:31.344 11:41:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3623798 ]] 00:05:31.344 11:41:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3623798 00:05:31.344 11:41:58 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3623798 ']' 00:05:31.344 11:41:58 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3623798 00:05:31.344 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3623798) - No such process 00:05:31.344 11:41:58 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3623798 is not found' 00:05:31.344 Process with pid 3623798 is not found 00:05:31.344 11:41:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3623971 ]] 00:05:31.344 11:41:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3623971 00:05:31.344 11:41:58 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3623971 ']' 00:05:31.344 11:41:58 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3623971 00:05:31.344 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3623971) - No such process 00:05:31.344 11:41:58 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3623971 is not found' 00:05:31.344 Process with pid 3623971 is not found 00:05:31.344 11:41:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:31.344 00:05:31.344 real 0m17.832s 00:05:31.344 user 0m29.955s 00:05:31.344 sys 0m5.694s 00:05:31.344 11:41:58 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.344 11:41:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.344 ************************************ 00:05:31.344 END TEST cpu_locks 00:05:31.344 ************************************ 00:05:31.344 00:05:31.344 real 0m43.415s 00:05:31.344 user 1m21.297s 00:05:31.344 sys 0m9.955s 00:05:31.344 11:41:58 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.344 11:41:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.344 ************************************ 00:05:31.344 END TEST event 00:05:31.344 ************************************ 00:05:31.344 11:41:58 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:31.344 11:41:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:31.344 11:41:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.344 11:41:58 -- common/autotest_common.sh@10 -- # set +x 00:05:31.344 ************************************ 00:05:31.344 START TEST thread 00:05:31.344 ************************************ 00:05:31.344 11:41:58 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:31.602 * Looking for test storage... 00:05:31.602 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:05:31.602 11:41:58 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:31.602 11:41:58 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:31.602 11:41:58 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.602 11:41:58 thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.602 ************************************ 00:05:31.602 START TEST thread_poller_perf 00:05:31.602 ************************************ 00:05:31.602 11:41:58 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:31.602 [2024-05-14 11:41:58.544601] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:05:31.602 [2024-05-14 11:41:58.544682] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3624488 ] 00:05:31.602 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.602 [2024-05-14 11:41:58.615650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.602 [2024-05-14 11:41:58.687847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.602 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:32.976 ====================================== 00:05:32.976 busy:2504524918 (cyc) 00:05:32.976 total_run_count: 873000 00:05:32.976 tsc_hz: 2500000000 (cyc) 00:05:32.976 ====================================== 00:05:32.976 poller_cost: 2868 (cyc), 1147 (nsec) 00:05:32.976 00:05:32.976 real 0m1.226s 00:05:32.976 user 0m1.130s 00:05:32.976 sys 0m0.093s 00:05:32.976 11:41:59 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:32.976 11:41:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:32.976 ************************************ 00:05:32.976 END TEST thread_poller_perf 00:05:32.976 ************************************ 00:05:32.976 11:41:59 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:32.976 11:41:59 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:32.976 11:41:59 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.976 11:41:59 thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.976 ************************************ 00:05:32.976 START TEST thread_poller_perf 00:05:32.976 ************************************ 00:05:32.976 11:41:59 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:32.976 [2024-05-14 11:41:59.864973] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
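The poller_cost figures printed above follow directly from the counters in the same block: busy cycles divided by total_run_count gives cycles per poll, and dividing by the TSC rate converts that to nanoseconds. Re-deriving the 1-microsecond-period run as a side calculation (not produced by the harness):

# side calculation only: reproduces 'poller_cost: 2868 (cyc), 1147 (nsec)' from the counters above
awk 'BEGIN {
    busy = 2504524918; runs = 873000; tsc_hz = 2500000000
    cyc  = busy / runs              # ~2868 cycles per poll
    nsec = cyc / (tsc_hz / 1e9)     # ~1147 ns at 2.5 GHz
    printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, nsec
}'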
00:05:32.977 [2024-05-14 11:41:59.865090] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3624660 ] 00:05:32.977 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.977 [2024-05-14 11:41:59.936924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.977 [2024-05-14 11:42:00.011987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.977 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:34.350 ====================================== 00:05:34.350 busy:2501323020 (cyc) 00:05:34.350 total_run_count: 13306000 00:05:34.350 tsc_hz: 2500000000 (cyc) 00:05:34.350 ====================================== 00:05:34.350 poller_cost: 187 (cyc), 74 (nsec) 00:05:34.350 00:05:34.350 real 0m1.233s 00:05:34.350 user 0m1.138s 00:05:34.350 sys 0m0.090s 00:05:34.350 11:42:01 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.350 11:42:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:34.350 ************************************ 00:05:34.351 END TEST thread_poller_perf 00:05:34.351 ************************************ 00:05:34.351 11:42:01 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:05:34.351 11:42:01 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:34.351 11:42:01 thread -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:34.351 11:42:01 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.351 11:42:01 thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.351 ************************************ 00:05:34.351 START TEST thread_spdk_lock 00:05:34.351 ************************************ 00:05:34.351 11:42:01 thread.thread_spdk_lock -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:34.351 [2024-05-14 11:42:01.170856] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:05:34.351 [2024-05-14 11:42:01.170970] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3625000 ] 00:05:34.351 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.351 [2024-05-14 11:42:01.242072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.351 [2024-05-14 11:42:01.314021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.351 [2024-05-14 11:42:01.314024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.918 [2024-05-14 11:42:01.810873] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 961:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:34.919 [2024-05-14 11:42:01.810909] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:05:34.919 [2024-05-14 11:42:01.810920] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x14cb400 00:05:34.919 [2024-05-14 11:42:01.811818] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:34.919 [2024-05-14 11:42:01.811921] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1022:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:34.919 [2024-05-14 11:42:01.811942] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:34.919 Starting test contend 00:05:34.919 Worker Delay Wait us Hold us Total us 00:05:34.919 0 3 175794 189837 365631 00:05:34.919 1 5 93995 289216 383212 00:05:34.919 PASS test contend 00:05:34.919 Starting test hold_by_poller 00:05:34.919 PASS test hold_by_poller 00:05:34.919 Starting test hold_by_message 00:05:34.919 PASS test hold_by_message 00:05:34.919 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:05:34.919 100014 assertions passed 00:05:34.919 0 assertions failed 00:05:34.919 00:05:34.919 real 0m0.721s 00:05:34.919 user 0m1.127s 00:05:34.919 sys 0m0.088s 00:05:34.919 11:42:01 thread.thread_spdk_lock -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.919 11:42:01 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:05:34.919 ************************************ 00:05:34.919 END TEST thread_spdk_lock 00:05:34.919 ************************************ 00:05:34.919 00:05:34.919 real 0m3.536s 00:05:34.919 user 0m3.515s 00:05:34.919 sys 0m0.520s 00:05:34.919 11:42:01 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.919 11:42:01 thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.919 ************************************ 00:05:34.919 END TEST thread 00:05:34.919 ************************************ 00:05:34.919 11:42:01 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:05:34.919 11:42:01 -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:34.919 11:42:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.919 11:42:01 -- common/autotest_common.sh@10 -- # set +x 00:05:34.919 ************************************ 00:05:34.919 START TEST accel 00:05:34.919 ************************************ 00:05:34.919 11:42:01 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:05:35.179 * Looking for test storage... 00:05:35.179 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:05:35.179 11:42:02 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:35.179 11:42:02 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:35.179 11:42:02 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:35.179 11:42:02 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3625355 00:05:35.179 11:42:02 accel -- accel/accel.sh@63 -- # waitforlisten 3625355 00:05:35.179 11:42:02 accel -- common/autotest_common.sh@827 -- # '[' -z 3625355 ']' 00:05:35.179 11:42:02 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.179 11:42:02 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:35.179 11:42:02 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:35.179 11:42:02 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:35.179 11:42:02 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.179 11:42:02 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:35.179 11:42:02 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.179 11:42:02 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.179 11:42:02 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.179 11:42:02 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.179 11:42:02 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.179 11:42:02 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.179 11:42:02 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:35.179 11:42:02 accel -- accel/accel.sh@41 -- # jq -r . 00:05:35.179 [2024-05-14 11:42:02.129622] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:05:35.179 [2024-05-14 11:42:02.129683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3625355 ] 00:05:35.179 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.179 [2024-05-14 11:42:02.198914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.437 [2024-05-14 11:42:02.274216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.006 11:42:02 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:36.006 11:42:02 accel -- common/autotest_common.sh@860 -- # return 0 00:05:36.006 11:42:02 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:36.006 11:42:02 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:36.006 11:42:02 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:36.006 11:42:02 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:36.006 11:42:02 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:36.006 11:42:02 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:36.006 11:42:02 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:36.006 11:42:02 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.006 11:42:02 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.006 11:42:02 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.006 11:42:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.006 11:42:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.006 11:42:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.006 11:42:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.006 11:42:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.006 11:42:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.006 11:42:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.006 11:42:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.006 11:42:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.006 11:42:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.006 11:42:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.006 11:42:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.006 11:42:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.006 11:42:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.006 11:42:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.006 11:42:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.006 11:42:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.006 11:42:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.006 11:42:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.006 11:42:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.006 11:42:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.006 11:42:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.006 11:42:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.006 11:42:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.006 11:42:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.006 11:42:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.006 
11:42:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.006 11:42:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.006 11:42:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.006 11:42:02 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.006 11:42:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.006 11:42:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.006 11:42:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.006 11:42:03 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.006 11:42:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.006 11:42:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.006 11:42:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.006 11:42:03 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.006 11:42:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.006 11:42:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.006 11:42:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.006 11:42:03 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.006 11:42:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.006 11:42:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.006 11:42:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.006 11:42:03 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.006 11:42:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.006 11:42:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.006 11:42:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.006 11:42:03 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.006 11:42:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.006 11:42:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.006 11:42:03 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.006 11:42:03 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.006 11:42:03 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.006 11:42:03 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.006 11:42:03 accel -- accel/accel.sh@75 -- # killprocess 3625355 00:05:36.006 11:42:03 accel -- common/autotest_common.sh@946 -- # '[' -z 3625355 ']' 00:05:36.006 11:42:03 accel -- common/autotest_common.sh@950 -- # kill -0 3625355 00:05:36.006 11:42:03 accel -- common/autotest_common.sh@951 -- # uname 00:05:36.006 11:42:03 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:36.006 11:42:03 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3625355 00:05:36.006 11:42:03 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:36.006 11:42:03 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:36.006 11:42:03 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3625355' 00:05:36.006 killing process with pid 3625355 00:05:36.006 11:42:03 accel -- common/autotest_common.sh@965 -- # kill 3625355 00:05:36.006 11:42:03 accel -- common/autotest_common.sh@970 -- # wait 3625355 00:05:36.574 11:42:03 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:36.574 11:42:03 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:36.574 11:42:03 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:36.574 11:42:03 accel -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.574 11:42:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.574 11:42:03 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:05:36.574 11:42:03 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:36.574 11:42:03 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:36.574 11:42:03 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.574 11:42:03 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.574 11:42:03 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.574 11:42:03 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.574 11:42:03 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.574 11:42:03 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:36.574 11:42:03 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:05:36.574 11:42:03 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.574 11:42:03 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:36.574 11:42:03 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:36.574 11:42:03 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:36.574 11:42:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.574 11:42:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.574 ************************************ 00:05:36.574 START TEST accel_missing_filename 00:05:36.574 ************************************ 00:05:36.574 11:42:03 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:05:36.574 11:42:03 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:36.574 11:42:03 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:36.574 11:42:03 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:36.574 11:42:03 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.574 11:42:03 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:36.574 11:42:03 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.574 11:42:03 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:36.574 11:42:03 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:36.574 11:42:03 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:36.574 11:42:03 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.574 11:42:03 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.574 11:42:03 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.574 11:42:03 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.574 11:42:03 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.574 11:42:03 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:36.574 11:42:03 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 
00:05:36.574 [2024-05-14 11:42:03.548086] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:05:36.574 [2024-05-14 11:42:03.548171] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3625662 ] 00:05:36.574 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.574 [2024-05-14 11:42:03.620509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.832 [2024-05-14 11:42:03.700537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.832 [2024-05-14 11:42:03.740610] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:36.832 [2024-05-14 11:42:03.800896] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:36.832 A filename is required. 00:05:36.832 11:42:03 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:36.832 11:42:03 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:36.832 11:42:03 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:36.832 11:42:03 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:36.832 11:42:03 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:36.832 11:42:03 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:36.832 00:05:36.832 real 0m0.346s 00:05:36.832 user 0m0.241s 00:05:36.832 sys 0m0.144s 00:05:36.832 11:42:03 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.832 11:42:03 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:36.832 ************************************ 00:05:36.832 END TEST accel_missing_filename 00:05:36.832 ************************************ 00:05:36.832 11:42:03 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:36.832 11:42:03 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:05:36.832 11:42:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.832 11:42:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.091 ************************************ 00:05:37.091 START TEST accel_compress_verify 00:05:37.091 ************************************ 00:05:37.091 11:42:03 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:37.091 11:42:03 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:37.091 11:42:03 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:37.091 11:42:03 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:37.091 11:42:03 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.091 11:42:03 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:37.091 11:42:03 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.091 11:42:03 accel.accel_compress_verify -- common/autotest_common.sh@651 -- 
# accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:37.091 11:42:03 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:37.091 11:42:03 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:37.091 11:42:03 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.091 11:42:03 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.091 11:42:03 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.091 11:42:03 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.091 11:42:03 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.091 11:42:03 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:37.091 11:42:03 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:37.091 [2024-05-14 11:42:03.981943] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:05:37.091 [2024-05-14 11:42:03.982025] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3625783 ] 00:05:37.091 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.091 [2024-05-14 11:42:04.054266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.091 [2024-05-14 11:42:04.125129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.091 [2024-05-14 11:42:04.165015] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:37.351 [2024-05-14 11:42:04.225297] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:37.351 00:05:37.351 Compression does not support the verify option, aborting. 
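Both aborts above come from argument validation: the earlier compress run is rejected for lacking an input file, and this one is rejected because -y (verify) is not supported for the compress workload. For contrast, a compress invocation the parser does accept would drop the -y flag (a sketch using the same binary and input file as the test; whether the workload then completes depends on the accel modules available on the node):

# sketch only: the command above minus the unsupported -y flag
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib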
00:05:37.351 11:42:04 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:37.351 11:42:04 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:37.351 11:42:04 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:37.351 11:42:04 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:37.351 11:42:04 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:37.351 11:42:04 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:37.351 00:05:37.351 real 0m0.336s 00:05:37.351 user 0m0.232s 00:05:37.351 sys 0m0.142s 00:05:37.351 11:42:04 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.351 11:42:04 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:37.351 ************************************ 00:05:37.351 END TEST accel_compress_verify 00:05:37.351 ************************************ 00:05:37.351 11:42:04 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:37.351 11:42:04 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:37.351 11:42:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.351 11:42:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.351 ************************************ 00:05:37.351 START TEST accel_wrong_workload 00:05:37.351 ************************************ 00:05:37.351 11:42:04 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:05:37.351 11:42:04 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:37.351 11:42:04 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:37.351 11:42:04 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:37.351 11:42:04 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.351 11:42:04 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:37.351 11:42:04 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.351 11:42:04 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:37.351 11:42:04 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:37.351 11:42:04 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:37.351 11:42:04 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.351 11:42:04 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.351 11:42:04 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.351 11:42:04 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.351 11:42:04 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.351 11:42:04 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:37.351 11:42:04 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
00:05:37.351 Unsupported workload type: foobar 00:05:37.352 [2024-05-14 11:42:04.410023] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:37.352 accel_perf options: 00:05:37.352 [-h help message] 00:05:37.352 [-q queue depth per core] 00:05:37.352 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:37.352 [-T number of threads per core 00:05:37.352 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:37.352 [-t time in seconds] 00:05:37.352 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:37.352 [ dif_verify, , dif_generate, dif_generate_copy 00:05:37.352 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:37.352 [-l for compress/decompress workloads, name of uncompressed input file 00:05:37.352 [-S for crc32c workload, use this seed value (default 0) 00:05:37.352 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:37.352 [-f for fill workload, use this BYTE value (default 255) 00:05:37.352 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:37.352 [-y verify result if this switch is on] 00:05:37.352 [-a tasks to allocate per core (default: same value as -q)] 00:05:37.352 Can be used to spread operations across a wider range of memory. 00:05:37.352 11:42:04 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:37.352 11:42:04 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:37.352 11:42:04 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:37.352 11:42:04 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:37.352 00:05:37.352 real 0m0.031s 00:05:37.352 user 0m0.015s 00:05:37.352 sys 0m0.016s 00:05:37.352 11:42:04 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.352 11:42:04 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:37.352 ************************************ 00:05:37.352 END TEST accel_wrong_workload 00:05:37.352 ************************************ 00:05:37.352 Error: writing output failed: Broken pipe 00:05:37.611 11:42:04 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:37.611 11:42:04 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:05:37.611 11:42:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.611 11:42:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.611 ************************************ 00:05:37.611 START TEST accel_negative_buffers 00:05:37.611 ************************************ 00:05:37.611 11:42:04 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:37.611 11:42:04 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:37.611 11:42:04 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:37.611 11:42:04 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:37.611 11:42:04 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.611 11:42:04 accel.accel_negative_buffers -- 
common/autotest_common.sh@640 -- # type -t accel_perf 00:05:37.611 11:42:04 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.611 11:42:04 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:37.611 11:42:04 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:37.611 11:42:04 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:37.611 11:42:04 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.611 11:42:04 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.611 11:42:04 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.611 11:42:04 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.611 11:42:04 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.611 11:42:04 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:37.611 11:42:04 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:37.611 -x option must be non-negative. 00:05:37.611 [2024-05-14 11:42:04.509989] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:37.611 accel_perf options: 00:05:37.611 [-h help message] 00:05:37.611 [-q queue depth per core] 00:05:37.611 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:37.611 [-T number of threads per core 00:05:37.611 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:37.611 [-t time in seconds] 00:05:37.611 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:37.611 [ dif_verify, , dif_generate, dif_generate_copy 00:05:37.611 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:37.611 [-l for compress/decompress workloads, name of uncompressed input file 00:05:37.611 [-S for crc32c workload, use this seed value (default 0) 00:05:37.611 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:37.611 [-f for fill workload, use this BYTE value (default 255) 00:05:37.611 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:37.611 [-y verify result if this switch is on] 00:05:37.611 [-a tasks to allocate per core (default: same value as -q)] 00:05:37.611 Can be used to spread operations across a wider range of memory. 
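The usage text above is printed because -x -1 fails validation; the same text notes that the xor workload needs at least two source buffers. For contrast, a well-formed xor invocation built only from the switches listed above (a sketch using the binary path from this workspace; -x 2 is the documented minimum):

# sketch only: the negative test's command with a valid source-buffer count
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2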
00:05:37.611 11:42:04 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:37.611 11:42:04 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:37.611 11:42:04 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:37.611 11:42:04 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:37.611 00:05:37.611 real 0m0.015s 00:05:37.611 user 0m0.005s 00:05:37.611 sys 0m0.010s 00:05:37.611 11:42:04 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.611 11:42:04 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:37.611 ************************************ 00:05:37.611 END TEST accel_negative_buffers 00:05:37.611 ************************************ 00:05:37.611 Error: writing output failed: Broken pipe 00:05:37.611 11:42:04 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:37.611 11:42:04 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:37.611 11:42:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.611 11:42:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.611 ************************************ 00:05:37.611 START TEST accel_crc32c 00:05:37.611 ************************************ 00:05:37.611 11:42:04 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:37.611 11:42:04 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:37.611 11:42:04 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:37.611 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.611 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.611 11:42:04 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:37.611 11:42:04 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:37.611 11:42:04 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:37.611 11:42:04 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.611 11:42:04 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.611 11:42:04 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.611 11:42:04 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.611 11:42:04 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.611 11:42:04 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:37.611 11:42:04 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:37.611 [2024-05-14 11:42:04.616391] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:05:37.611 [2024-05-14 11:42:04.616489] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3626093 ] 00:05:37.611 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.611 [2024-05-14 11:42:04.688018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.871 [2024-05-14 11:42:04.766318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:37.871 11:42:04 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.871 11:42:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.249 11:42:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.249 11:42:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.249 11:42:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.249 11:42:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.249 11:42:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.249 11:42:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.249 11:42:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.249 11:42:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.249 11:42:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.249 11:42:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.249 11:42:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.249 11:42:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.249 11:42:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.249 11:42:05 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:05:39.249 11:42:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.249 11:42:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.249 11:42:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.249 11:42:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.249 11:42:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.249 11:42:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.249 11:42:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.249 11:42:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.249 11:42:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.249 11:42:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.249 11:42:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:39.249 11:42:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:39.249 11:42:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.249 00:05:39.249 real 0m1.341s 00:05:39.249 user 0m1.213s 00:05:39.249 sys 0m0.131s 00:05:39.249 11:42:05 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.249 11:42:05 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:39.249 ************************************ 00:05:39.249 END TEST accel_crc32c 00:05:39.249 ************************************ 00:05:39.249 11:42:05 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:39.249 11:42:05 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:39.249 11:42:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.249 11:42:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.249 ************************************ 00:05:39.249 START TEST accel_crc32c_C2 00:05:39.249 ************************************ 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:39.249 [2024-05-14 11:42:06.037220] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
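The END TEST accel_crc32c banner above closes the plain software crc32c case at roughly 1.34 s wall time, and run_test immediately launches the -C 2 variant of the same workload through the accel_perf example binary. As a minimal sketch, using only the path and flags visible in the invocation above, and assuming the JSON config piped in on /dev/fd/62 is not required for a standalone software-module run, the case could be repeated by hand:

    # Hypothetical manual rerun of the crc32c -C 2 case; path and flags copied from the trace above.
    APP=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf
    "$APP" -t 1 -w crc32c -y -C 2   # 1-second crc32c run, matching the '1 seconds' / crc32c vals traced above
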
00:05:39.249 [2024-05-14 11:42:06.037317] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3626503 ] 00:05:39.249 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.249 [2024-05-14 11:42:06.107429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.249 [2024-05-14 11:42:06.178904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var 
val 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.249 11:42:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.626 00:05:40.626 real 0m1.334s 00:05:40.626 user 0m1.209s 00:05:40.626 sys 0m0.133s 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.626 11:42:07 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:40.626 ************************************ 00:05:40.626 END TEST accel_crc32c_C2 00:05:40.626 ************************************ 00:05:40.626 11:42:07 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:40.626 11:42:07 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:40.626 11:42:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.626 11:42:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.626 ************************************ 00:05:40.626 START TEST accel_copy 00:05:40.626 ************************************ 00:05:40.626 11:42:07 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:40.626 11:42:07 accel.accel_copy -- 
accel/accel.sh@41 -- # jq -r . 00:05:40.626 [2024-05-14 11:42:07.455478] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:05:40.626 [2024-05-14 11:42:07.455558] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3626760 ] 00:05:40.626 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.626 [2024-05-14 11:42:07.527366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.626 [2024-05-14 11:42:07.601432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.626 11:42:07 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.627 11:42:07 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:40.627 11:42:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:42.002 11:42:08 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.002 00:05:42.002 real 0m1.338s 00:05:42.002 user 0m1.205s 00:05:42.002 sys 0m0.137s 00:05:42.002 11:42:08 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.002 11:42:08 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:42.002 ************************************ 00:05:42.002 END TEST accel_copy 00:05:42.002 ************************************ 00:05:42.002 11:42:08 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:42.002 11:42:08 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:42.002 11:42:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.002 11:42:08 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.002 ************************************ 00:05:42.002 START TEST accel_fill 00:05:42.002 ************************************ 00:05:42.002 11:42:08 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:42.002 11:42:08 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:42.002 11:42:08 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:42.002 11:42:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.002 11:42:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.002 11:42:08 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:42.002 11:42:08 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:42.002 11:42:08 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:42.002 11:42:08 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.002 11:42:08 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.002 11:42:08 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.002 11:42:08 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.002 11:42:08 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.002 11:42:08 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:42.002 11:42:08 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:42.003 [2024-05-14 11:42:08.871426] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
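accel_copy completes in a similar ~1.34 s envelope, and the fill case starting here is the first one to add extra knobs to the command line (-f 128 -q 64 -a 64, visible verbatim in the invocation above). A minimal standalone sketch, under the same assumption that the /dev/fd/62 config can be omitted for a software-only run:

    # Hypothetical standalone rerun of the fill case; flags taken from the accel_fill invocation above.
    APP=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf
    "$APP" -t 1 -w fill -f 128 -q 64 -a 64 -y
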
00:05:42.003 [2024-05-14 11:42:08.871514] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3627042 ] 00:05:42.003 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.003 [2024-05-14 11:42:08.943530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.003 [2024-05-14 11:42:09.014821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.003 11:42:09 
accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.003 11:42:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:43.379 11:42:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:43.379 11:42:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:43.379 11:42:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:43.379 11:42:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:43.379 11:42:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:43.379 11:42:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:43.379 11:42:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:43.379 11:42:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:43.379 11:42:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:43.379 11:42:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:43.379 11:42:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:43.379 11:42:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:43.379 11:42:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:43.379 11:42:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:43.379 11:42:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:43.379 11:42:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var 
val 00:05:43.379 11:42:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:43.379 11:42:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:43.379 11:42:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:43.379 11:42:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:43.380 11:42:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:43.380 11:42:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:43.380 11:42:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:43.380 11:42:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:43.380 11:42:10 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:43.380 11:42:10 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:43.380 11:42:10 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.380 00:05:43.380 real 0m1.333s 00:05:43.380 user 0m1.212s 00:05:43.380 sys 0m0.125s 00:05:43.380 11:42:10 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.380 11:42:10 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:43.380 ************************************ 00:05:43.380 END TEST accel_fill 00:05:43.380 ************************************ 00:05:43.380 11:42:10 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:43.380 11:42:10 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:43.380 11:42:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.380 11:42:10 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.380 ************************************ 00:05:43.380 START TEST accel_copy_crc32c 00:05:43.380 ************************************ 00:05:43.380 11:42:10 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:05:43.380 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:43.380 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:43.380 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.380 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.380 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:43.380 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:43.380 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:43.380 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.380 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.380 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.380 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.380 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.380 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:43.380 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:43.380 [2024-05-14 11:42:10.285962] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
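The fill case ends at 1.333 s real time and run_test moves on to copy_crc32c. The START/END banners and the real/user/sys lines in this log appear to come from the run_test wrapper in common/autotest_common.sh; a deliberately simplified, hypothetical sketch of the shape of such a wrapper (not the actual implementation, which also handles xtrace and failure accounting) is:

    # Hypothetical, simplified stand-in for the run_test calls seen in this trace.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        time "$@"                      # e.g. accel_test -t 1 -w copy_crc32c -y
        echo "END TEST $name"
        echo "************************************"
    }
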
00:05:43.380 [2024-05-14 11:42:10.286052] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3627321 ] 00:05:43.380 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.380 [2024-05-14 11:42:10.355525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.380 [2024-05-14 11:42:10.427599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.380 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.380 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.380 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.380 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.380 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.380 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.380 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.380 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.380 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.639 11:42:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.575 11:42:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.575 11:42:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.575 11:42:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.575 11:42:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.576 11:42:11 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.576 11:42:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.576 11:42:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.576 11:42:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.576 11:42:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.576 11:42:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.576 11:42:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.576 11:42:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.576 11:42:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.576 11:42:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.576 11:42:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.576 11:42:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.576 11:42:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.576 11:42:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.576 11:42:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.576 11:42:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.576 11:42:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.576 11:42:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.576 11:42:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.576 11:42:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.576 11:42:11 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.576 11:42:11 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:44.576 11:42:11 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.576 00:05:44.576 real 0m1.332s 00:05:44.576 user 0m1.213s 00:05:44.576 sys 0m0.124s 00:05:44.576 11:42:11 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.576 11:42:11 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:44.576 ************************************ 00:05:44.576 END TEST accel_copy_crc32c 00:05:44.576 ************************************ 00:05:44.576 11:42:11 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:44.576 11:42:11 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:44.576 11:42:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.576 11:42:11 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.835 ************************************ 00:05:44.835 START TEST accel_copy_crc32c_C2 00:05:44.835 ************************************ 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:44.835 [2024-05-14 11:42:11.698484] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:05:44.835 [2024-05-14 11:42:11.698566] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3627608 ] 00:05:44.835 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.835 [2024-05-14 11:42:11.768109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.835 [2024-05-14 11:42:11.839305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.835 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:44.836 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.836 11:42:11 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.836 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.836 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.836 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.836 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.836 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.836 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.836 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.836 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.836 11:42:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.212 00:05:46.212 real 0m1.330s 00:05:46.212 user 0m1.214s 00:05:46.212 sys 0m0.121s 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:46.212 11:42:13 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:46.212 ************************************ 00:05:46.212 END TEST 
accel_copy_crc32c_C2 00:05:46.212 ************************************ 00:05:46.212 11:42:13 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:46.212 11:42:13 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:46.212 11:42:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.212 11:42:13 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.212 ************************************ 00:05:46.212 START TEST accel_dualcast 00:05:46.212 ************************************ 00:05:46.212 11:42:13 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:05:46.212 11:42:13 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:46.212 11:42:13 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:46.212 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:46.212 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:46.212 11:42:13 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:46.212 11:42:13 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:46.212 11:42:13 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:46.212 11:42:13 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.212 11:42:13 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.212 11:42:13 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.212 11:42:13 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.212 11:42:13 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.212 11:42:13 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:46.212 11:42:13 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:46.212 [2024-05-14 11:42:13.098067] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
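accel_copy_crc32c_C2 closes at 1.330 s, and dualcast is the last workload to begin in this part of the log; its invocation above uses the same minimal flag set as the copy case, with only the workload name changed (-t 1 -w dualcast -y). A final sketch, with the same assumption as before about skipping the piped-in config:

    # Hypothetical standalone rerun of the dualcast case; flags from the invocation above.
    APP=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf
    "$APP" -t 1 -w dualcast -y
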
00:05:46.212 [2024-05-14 11:42:13.098132] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3627890 ] 00:05:46.213 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.213 [2024-05-14 11:42:13.167369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.213 [2024-05-14 11:42:13.238656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:46.213 
11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:46.213 11:42:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.590 11:42:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:47.590 11:42:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.590 11:42:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.590 11:42:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.590 11:42:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:47.590 11:42:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.590 11:42:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.590 11:42:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.590 11:42:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:47.590 11:42:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.590 11:42:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.590 11:42:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.590 11:42:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:47.590 11:42:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.590 11:42:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.590 11:42:14 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:05:47.590 11:42:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:47.590 11:42:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.590 11:42:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.590 11:42:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.590 11:42:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:47.590 11:42:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.590 11:42:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.590 11:42:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.590 11:42:14 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.590 11:42:14 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:47.590 11:42:14 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.590 00:05:47.590 real 0m1.329s 00:05:47.590 user 0m1.202s 00:05:47.590 sys 0m0.131s 00:05:47.590 11:42:14 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.590 11:42:14 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:47.590 ************************************ 00:05:47.590 END TEST accel_dualcast 00:05:47.590 ************************************ 00:05:47.590 11:42:14 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:47.590 11:42:14 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:47.590 11:42:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.590 11:42:14 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.590 ************************************ 00:05:47.590 START TEST accel_compare 00:05:47.590 ************************************ 00:05:47.590 11:42:14 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:05:47.590 11:42:14 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:47.590 11:42:14 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:47.590 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.590 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.590 11:42:14 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:47.590 11:42:14 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:47.590 11:42:14 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:47.590 11:42:14 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.590 11:42:14 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.590 11:42:14 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.590 11:42:14 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.590 11:42:14 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.590 11:42:14 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:47.590 11:42:14 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:47.590 [2024-05-14 11:42:14.513027] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:05:47.590 [2024-05-14 11:42:14.513116] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3628179 ] 00:05:47.590 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.590 [2024-05-14 11:42:14.583300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.590 [2024-05-14 11:42:14.655462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.849 11:42:14 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:47.849 11:42:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:48.801 11:42:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:48.801 11:42:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:48.801 11:42:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:48.801 11:42:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:48.801 11:42:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:48.801 11:42:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:48.801 11:42:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:48.801 11:42:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:48.801 11:42:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:48.801 11:42:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:48.801 11:42:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:48.801 11:42:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:48.801 11:42:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:48.801 11:42:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:48.801 11:42:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:48.801 11:42:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:48.801 11:42:15 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:05:48.801 11:42:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:48.801 11:42:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:48.801 11:42:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:48.801 11:42:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:48.801 11:42:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:48.801 11:42:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:48.801 11:42:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:48.801 11:42:15 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.801 11:42:15 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:48.801 11:42:15 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.801 00:05:48.801 real 0m1.332s 00:05:48.801 user 0m1.212s 00:05:48.801 sys 0m0.124s 00:05:48.801 11:42:15 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.801 11:42:15 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:48.801 ************************************ 00:05:48.801 END TEST accel_compare 00:05:48.801 ************************************ 00:05:48.801 11:42:15 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:48.801 11:42:15 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:48.801 11:42:15 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.801 11:42:15 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.060 ************************************ 00:05:49.060 START TEST accel_xor 00:05:49.060 ************************************ 00:05:49.060 11:42:15 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:05:49.060 11:42:15 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:49.060 11:42:15 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:49.060 11:42:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.060 11:42:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.060 11:42:15 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:49.060 11:42:15 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:49.060 11:42:15 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:49.060 11:42:15 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.060 11:42:15 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.060 11:42:15 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.060 11:42:15 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.060 11:42:15 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.060 11:42:15 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:49.060 11:42:15 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:49.060 [2024-05-14 11:42:15.927901] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:05:49.060 [2024-05-14 11:42:15.927981] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3628464 ] 00:05:49.060 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.060 [2024-05-14 11:42:15.997748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.060 [2024-05-14 11:42:16.068853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:49.060 11:42:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.061 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.061 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:49.061 11:42:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:49.061 11:42:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:49.061 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:49.061 11:42:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.457 11:42:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.457 11:42:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.457 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.457 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.457 11:42:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.457 11:42:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.457 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.457 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.457 11:42:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.457 11:42:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.457 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.457 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.458 
11:42:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.458 00:05:50.458 real 0m1.330s 00:05:50.458 user 0m1.206s 00:05:50.458 sys 0m0.129s 00:05:50.458 11:42:17 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:50.458 11:42:17 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:50.458 ************************************ 00:05:50.458 END TEST accel_xor 00:05:50.458 ************************************ 00:05:50.458 11:42:17 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:50.458 11:42:17 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:50.458 11:42:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.458 11:42:17 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.458 ************************************ 00:05:50.458 START TEST accel_xor 00:05:50.458 ************************************ 00:05:50.458 11:42:17 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:50.458 [2024-05-14 11:42:17.343786] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:05:50.458 [2024-05-14 11:42:17.343868] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3628744 ] 00:05:50.458 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.458 [2024-05-14 11:42:17.414091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.458 [2024-05-14 11:42:17.485844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.458 11:42:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.899 11:42:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.899 11:42:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.899 11:42:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.899 11:42:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.899 11:42:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.899 11:42:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.899 11:42:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.899 11:42:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.899 11:42:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.899 11:42:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.899 11:42:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.899 11:42:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.899 11:42:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.899 11:42:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.899 11:42:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.899 11:42:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.899 11:42:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.899 
11:42:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.899 11:42:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.899 11:42:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.899 11:42:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.899 11:42:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.899 11:42:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.899 11:42:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.899 11:42:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.899 11:42:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:51.899 11:42:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.899 00:05:51.899 real 0m1.334s 00:05:51.899 user 0m1.212s 00:05:51.899 sys 0m0.127s 00:05:51.899 11:42:18 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:51.899 11:42:18 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:51.899 ************************************ 00:05:51.899 END TEST accel_xor 00:05:51.899 ************************************ 00:05:51.899 11:42:18 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:51.899 11:42:18 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:51.899 11:42:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:51.899 11:42:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.899 ************************************ 00:05:51.899 START TEST accel_dif_verify 00:05:51.899 ************************************ 00:05:51.899 11:42:18 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:51.899 [2024-05-14 11:42:18.761329] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:05:51.899 [2024-05-14 11:42:18.761428] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3629018 ] 00:05:51.899 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.899 [2024-05-14 11:42:18.832521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.899 [2024-05-14 11:42:18.905758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.899 
11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.899 11:42:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.900 11:42:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.275 11:42:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:53.275 
11:42:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.275 11:42:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.275 11:42:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.275 11:42:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:53.275 11:42:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.275 11:42:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.275 11:42:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.275 11:42:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:53.275 11:42:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.275 11:42:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.275 11:42:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.275 11:42:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:53.275 11:42:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.275 11:42:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.275 11:42:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.275 11:42:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:53.275 11:42:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.275 11:42:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.275 11:42:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.275 11:42:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:53.275 11:42:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.275 11:42:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.275 11:42:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.275 11:42:20 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.275 11:42:20 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:53.275 11:42:20 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.275 00:05:53.275 real 0m1.339s 00:05:53.275 user 0m1.214s 00:05:53.275 sys 0m0.131s 00:05:53.275 11:42:20 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:53.275 11:42:20 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:53.275 ************************************ 00:05:53.275 END TEST accel_dif_verify 00:05:53.275 ************************************ 00:05:53.275 11:42:20 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:53.275 11:42:20 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:53.275 11:42:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:53.275 11:42:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.275 ************************************ 00:05:53.275 START TEST accel_dif_generate 00:05:53.275 ************************************ 00:05:53.275 11:42:20 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.275 
11:42:20 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:53.275 [2024-05-14 11:42:20.156197] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:05:53.275 [2024-05-14 11:42:20.156276] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3629224 ] 00:05:53.275 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.275 [2024-05-14 11:42:20.226363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.275 [2024-05-14 11:42:20.299918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:53.275 11:42:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.275 11:42:20 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.276 11:42:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:54.652 11:42:21 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.652 00:05:54.652 real 0m1.334s 00:05:54.652 user 0m1.207s 00:05:54.652 sys 
0m0.132s 00:05:54.652 11:42:21 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.652 11:42:21 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:54.652 ************************************ 00:05:54.652 END TEST accel_dif_generate 00:05:54.652 ************************************ 00:05:54.652 11:42:21 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:54.652 11:42:21 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:54.652 11:42:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.652 11:42:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.652 ************************************ 00:05:54.652 START TEST accel_dif_generate_copy 00:05:54.652 ************************************ 00:05:54.652 11:42:21 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:05:54.652 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:54.652 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:54.652 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.652 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.652 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:54.652 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:54.652 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:54.652 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.652 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.652 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.652 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.652 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.652 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:54.652 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:54.652 [2024-05-14 11:42:21.575409] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:05:54.652 [2024-05-14 11:42:21.575491] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3629451 ] 00:05:54.652 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.652 [2024-05-14 11:42:21.646788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.652 [2024-05-14 11:42:21.718419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.911 11:42:21 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.911 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.912 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.912 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.912 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:54.912 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.912 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.912 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.912 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:54.912 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.912 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.912 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.912 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:54.912 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.912 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.912 11:42:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.847 00:05:55.847 real 0m1.340s 00:05:55.847 user 0m1.219s 00:05:55.847 sys 0m0.137s 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.847 11:42:22 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:55.847 ************************************ 00:05:55.847 END TEST accel_dif_generate_copy 00:05:55.847 ************************************ 00:05:56.106 11:42:22 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:56.106 11:42:22 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:56.106 11:42:22 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:56.106 11:42:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.106 11:42:22 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.106 ************************************ 00:05:56.106 START TEST accel_comp 00:05:56.106 ************************************ 00:05:56.106 11:42:22 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:56.106 11:42:22 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:56.106 11:42:22 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:05:56.106 11:42:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.106 11:42:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.106 11:42:22 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:56.106 11:42:22 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:56.106 11:42:22 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:56.106 11:42:22 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.106 11:42:22 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.106 11:42:22 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.106 11:42:22 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.106 11:42:22 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.106 11:42:22 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:56.106 11:42:22 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:56.106 [2024-05-14 11:42:23.005503] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:05:56.106 [2024-05-14 11:42:23.005586] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3629671 ] 00:05:56.106 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.106 [2024-05-14 11:42:23.077613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.106 [2024-05-14 11:42:23.151066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.106 11:42:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:56.106 11:42:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.106 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.106 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.106 11:42:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:56.106 11:42:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.106 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.106 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.365 
11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:56.365 11:42:23 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.365 11:42:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:57.301 11:42:24 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.301 00:05:57.301 real 0m1.344s 00:05:57.301 user 0m1.230s 00:05:57.301 sys 0m0.129s 00:05:57.301 11:42:24 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:57.301 11:42:24 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:57.301 ************************************ 00:05:57.301 END TEST accel_comp 00:05:57.301 ************************************ 00:05:57.301 11:42:24 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:57.301 11:42:24 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:57.302 11:42:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:57.302 11:42:24 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.561 ************************************ 00:05:57.561 START TEST accel_decomp 00:05:57.561 ************************************ 00:05:57.561 11:42:24 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:57.561 [2024-05-14 11:42:24.438233] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:05:57.561 [2024-05-14 11:42:24.438316] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3629925 ] 00:05:57.561 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.561 [2024-05-14 11:42:24.512958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.561 [2024-05-14 11:42:24.587537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.561 
11:42:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.561 11:42:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.562 11:42:24 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.562 11:42:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:58.939 11:42:25 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.939 00:05:58.939 real 0m1.349s 00:05:58.939 user 0m1.230s 00:05:58.939 sys 0m0.134s 00:05:58.939 11:42:25 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:58.939 11:42:25 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:58.939 ************************************ 00:05:58.939 END TEST accel_decomp 00:05:58.939 ************************************ 00:05:58.939 
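[editor's note] The decompress run that just finished (TEST accel_decomp) is driven entirely by the accel_perf example binary traced above; a minimal sketch of reproducing that invocation by hand follows. The binary path and the -t 1 -w decompress -l .../test/accel/bib -y arguments are copied verbatim from the accel/accel.sh@117 run_test line in the trace, and the trace shows the run landing on the software accel module. Dropping the harness-supplied '-c /dev/fd/62' JSON config (built by build_accel_config in the trace) is an assumption made only so the command can be run standalone.

    # Hedged sketch: re-run the recorded accel_decomp workload outside the test harness.
    # -t 1           : 1-second run (matches val='1 seconds' in the trace)
    # -w decompress  : same opcode as accel_opc=decompress above
    # -l .../bib     : same input file passed by accel/accel.sh@117
    # -y             : kept exactly as recorded in the trace
    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y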
11:42:25 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:58.939 11:42:25 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:58.939 11:42:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.939 11:42:25 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.939 ************************************ 00:05:58.939 START TEST accel_decmop_full 00:05:58.939 ************************************ 00:05:58.939 11:42:25 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:58.939 11:42:25 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:05:58.939 11:42:25 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:05:58.939 11:42:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.939 11:42:25 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:58.939 11:42:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.939 11:42:25 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:58.939 11:42:25 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:05:58.939 11:42:25 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.939 11:42:25 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.939 11:42:25 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.939 11:42:25 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.939 11:42:25 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.939 11:42:25 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:05:58.939 11:42:25 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:05:58.939 [2024-05-14 11:42:25.874800] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:05:58.939 [2024-05-14 11:42:25.874886] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3630213 ] 00:05:58.939 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.939 [2024-05-14 11:42:25.946057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.939 [2024-05-14 11:42:26.018004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.198 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.199 11:42:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:00.134 11:42:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:00.134 11:42:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:00.134 11:42:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:00.134 11:42:27 accel.accel_decmop_full -- accel/accel.sh@19 
-- # read -r var val 00:06:00.134 11:42:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:00.134 11:42:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:00.134 11:42:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:00.134 11:42:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:00.134 11:42:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:00.134 11:42:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:00.134 11:42:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:00.134 11:42:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:00.134 11:42:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:00.134 11:42:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:00.134 11:42:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:00.134 11:42:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:00.134 11:42:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:00.134 11:42:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:00.134 11:42:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:00.134 11:42:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:00.134 11:42:27 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:00.134 11:42:27 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:00.135 11:42:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:00.135 11:42:27 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:00.135 11:42:27 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.135 11:42:27 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:00.135 11:42:27 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.135 00:06:00.135 real 0m1.348s 00:06:00.135 user 0m1.226s 00:06:00.135 sys 0m0.135s 00:06:00.135 11:42:27 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.135 11:42:27 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:06:00.135 ************************************ 00:06:00.135 END TEST accel_decmop_full 00:06:00.135 ************************************ 00:06:00.393 11:42:27 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:00.393 11:42:27 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:00.393 11:42:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.393 11:42:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.393 ************************************ 00:06:00.393 START TEST accel_decomp_mcore 00:06:00.393 ************************************ 00:06:00.393 11:42:27 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:00.393 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:00.393 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:00.393 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.393 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.393 11:42:27 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:00.393 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:00.393 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:00.393 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.393 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.393 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.393 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.393 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.393 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:00.393 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:00.393 [2024-05-14 11:42:27.311685] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:06:00.393 [2024-05-14 11:42:27.311771] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3630498 ] 00:06:00.393 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.393 [2024-05-14 11:42:27.382696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:00.393 [2024-05-14 11:42:27.458416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.393 [2024-05-14 11:42:27.458514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.393 [2024-05-14 11:42:27.458596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.394 [2024-05-14 11:42:27.458597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.652 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.652 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.652 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.652 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.652 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.652 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.652 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.652 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.652 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.652 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.652 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.652 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.652 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:00.652 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.652 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.652 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.652 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.652 11:42:27 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:00.652 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.653 11:42:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.589 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.589 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.589 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.589 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.589 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.589 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.589 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.589 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.589 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.589 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.589 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.590 00:06:01.590 real 0m1.353s 00:06:01.590 user 0m4.550s 00:06:01.590 sys 0m0.146s 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:01.590 11:42:28 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:01.590 ************************************ 00:06:01.590 END TEST accel_decomp_mcore 00:06:01.590 ************************************ 00:06:01.849 11:42:28 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:01.849 11:42:28 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:01.849 11:42:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.849 11:42:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.849 ************************************ 00:06:01.849 START TEST accel_decomp_full_mcore 00:06:01.849 ************************************ 00:06:01.849 11:42:28 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:01.849 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:01.849 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:01.849 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.849 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:01.849 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.849 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:01.849 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:01.849 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.849 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.849 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.849 11:42:28 accel.accel_decomp_full_mcore -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.849 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.849 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:01.849 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:01.849 [2024-05-14 11:42:28.753281] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:06:01.849 [2024-05-14 11:42:28.753359] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3630786 ] 00:06:01.849 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.849 [2024-05-14 11:42:28.823048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:01.849 [2024-05-14 11:42:28.897161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.849 [2024-05-14 11:42:28.897258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.849 [2024-05-14 11:42:28.897340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:01.849 [2024-05-14 11:42:28.897342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:02.109 11:42:28 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.109 11:42:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.046 00:06:03.046 real 0m1.361s 00:06:03.046 user 0m4.574s 00:06:03.046 sys 0m0.146s 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.046 11:42:30 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:03.046 ************************************ 00:06:03.046 END TEST accel_decomp_full_mcore 00:06:03.046 ************************************ 00:06:03.046 11:42:30 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:03.046 11:42:30 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:03.046 11:42:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.046 11:42:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.306 ************************************ 00:06:03.306 START TEST accel_decomp_mthread 00:06:03.306 ************************************ 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@41 
-- # jq -r . 00:06:03.306 [2024-05-14 11:42:30.191317] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:06:03.306 [2024-05-14 11:42:30.191409] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3631076 ] 00:06:03.306 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.306 [2024-05-14 11:42:30.262605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.306 [2024-05-14 11:42:30.336622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.306 
11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.306 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.565 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.565 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.565 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.565 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:03.565 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.565 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.565 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.565 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.565 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.565 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.565 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.565 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.566 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.566 11:42:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.566 11:42:30 accel.accel_decomp_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.500 00:06:04.500 real 0m1.347s 00:06:04.500 user 0m1.225s 00:06:04.500 sys 0m0.139s 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.500 11:42:31 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:04.500 ************************************ 00:06:04.500 END TEST accel_decomp_mthread 00:06:04.500 ************************************ 00:06:04.500 11:42:31 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:04.500 11:42:31 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:04.500 11:42:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.500 
11:42:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.759 ************************************ 00:06:04.759 START TEST accel_decomp_full_mthread 00:06:04.759 ************************************ 00:06:04.759 11:42:31 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:04.759 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:04.759 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:04.760 [2024-05-14 11:42:31.627694] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:06:04.760 [2024-05-14 11:42:31.627790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3631356 ] 00:06:04.760 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.760 [2024-05-14 11:42:31.698299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.760 [2024-05-14 11:42:31.772403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.760 11:42:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.138 00:06:06.138 real 0m1.367s 00:06:06.138 user 0m1.246s 00:06:06.138 sys 0m0.136s 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.138 11:42:32 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:06.138 ************************************ 00:06:06.138 END TEST accel_decomp_full_mthread 00:06:06.138 
************************************ 00:06:06.138 11:42:33 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:06.138 11:42:33 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:06.138 11:42:33 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:06.138 11:42:33 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:06.138 11:42:33 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.138 11:42:33 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.138 11:42:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.138 11:42:33 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.138 11:42:33 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.138 11:42:33 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.138 11:42:33 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.138 11:42:33 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:06.138 11:42:33 accel -- accel/accel.sh@41 -- # jq -r . 00:06:06.138 ************************************ 00:06:06.138 START TEST accel_dif_functional_tests 00:06:06.138 ************************************ 00:06:06.138 11:42:33 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:06.138 [2024-05-14 11:42:33.086220] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:06:06.138 [2024-05-14 11:42:33.086300] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3631644 ] 00:06:06.138 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.138 [2024-05-14 11:42:33.154409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:06.397 [2024-05-14 11:42:33.227801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.397 [2024-05-14 11:42:33.227895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.397 [2024-05-14 11:42:33.227895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.397 00:06:06.397 00:06:06.397 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.397 http://cunit.sourceforge.net/ 00:06:06.397 00:06:06.397 00:06:06.397 Suite: accel_dif 00:06:06.397 Test: verify: DIF generated, GUARD check ...passed 00:06:06.397 Test: verify: DIF generated, APPTAG check ...passed 00:06:06.397 Test: verify: DIF generated, REFTAG check ...passed 00:06:06.397 Test: verify: DIF not generated, GUARD check ...[2024-05-14 11:42:33.295368] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:06.397 [2024-05-14 11:42:33.295420] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:06.397 passed 00:06:06.397 Test: verify: DIF not generated, APPTAG check ...[2024-05-14 11:42:33.295455] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:06.397 [2024-05-14 11:42:33.295474] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:06.397 passed 00:06:06.397 Test: verify: DIF not generated, REFTAG check ...[2024-05-14 11:42:33.295494] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:06.397 [2024-05-14 
11:42:33.295513] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:06.397 passed 00:06:06.397 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:06.397 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-14 11:42:33.295572] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:06.397 passed 00:06:06.397 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:06.397 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:06.397 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:06.397 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-14 11:42:33.295674] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:06.397 passed 00:06:06.397 Test: generate copy: DIF generated, GUARD check ...passed 00:06:06.397 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:06.397 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:06.397 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:06.397 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:06.397 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:06.397 Test: generate copy: iovecs-len validate ...[2024-05-14 11:42:33.295843] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:06.397 passed 00:06:06.397 Test: generate copy: buffer alignment validate ...passed 00:06:06.397 00:06:06.397 Run Summary: Type Total Ran Passed Failed Inactive 00:06:06.397 suites 1 1 n/a 0 0 00:06:06.397 tests 20 20 20 0 0 00:06:06.397 asserts 204 204 204 0 n/a 00:06:06.397 00:06:06.397 Elapsed time = 0.002 seconds 00:06:06.397 00:06:06.397 real 0m0.393s 00:06:06.397 user 0m0.560s 00:06:06.397 sys 0m0.146s 00:06:06.397 11:42:33 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.397 11:42:33 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:06.397 ************************************ 00:06:06.397 END TEST accel_dif_functional_tests 00:06:06.397 ************************************ 00:06:06.657 00:06:06.657 real 0m31.497s 00:06:06.657 user 0m34.358s 00:06:06.657 sys 0m5.012s 00:06:06.657 11:42:33 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.657 11:42:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.657 ************************************ 00:06:06.657 END TEST accel 00:06:06.657 ************************************ 00:06:06.657 11:42:33 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:06.657 11:42:33 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.657 11:42:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.657 11:42:33 -- common/autotest_common.sh@10 -- # set +x 00:06:06.657 ************************************ 00:06:06.657 START TEST accel_rpc 00:06:06.657 ************************************ 00:06:06.657 11:42:33 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:06.657 * Looking for test storage... 
00:06:06.657 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:06:06.657 11:42:33 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:06.657 11:42:33 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3631718 00:06:06.657 11:42:33 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3631718 00:06:06.657 11:42:33 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:06.657 11:42:33 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 3631718 ']' 00:06:06.657 11:42:33 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.657 11:42:33 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:06.657 11:42:33 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.657 11:42:33 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:06.657 11:42:33 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.657 [2024-05-14 11:42:33.713277] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:06:06.657 [2024-05-14 11:42:33.713366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3631718 ] 00:06:06.915 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.916 [2024-05-14 11:42:33.783729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.916 [2024-05-14 11:42:33.861823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.482 11:42:34 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:07.482 11:42:34 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:07.482 11:42:34 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:07.482 11:42:34 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:07.482 11:42:34 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:07.482 11:42:34 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:07.482 11:42:34 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:07.482 11:42:34 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:07.482 11:42:34 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.482 11:42:34 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.482 ************************************ 00:06:07.482 START TEST accel_assign_opcode 00:06:07.482 ************************************ 00:06:07.482 11:42:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:06:07.482 11:42:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:07.482 11:42:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.482 11:42:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:07.482 [2024-05-14 11:42:34.571947] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:07.740 11:42:34 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.740 11:42:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:07.740 11:42:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.740 11:42:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:07.740 [2024-05-14 11:42:34.579956] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:07.740 11:42:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.740 11:42:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:07.740 11:42:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.740 11:42:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:07.740 11:42:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.740 11:42:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:07.740 11:42:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.740 11:42:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:07.740 11:42:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:07.740 11:42:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:07.740 11:42:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.740 software 00:06:07.740 00:06:07.740 real 0m0.233s 00:06:07.740 user 0m0.043s 00:06:07.740 sys 0m0.013s 00:06:07.740 11:42:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:07.740 11:42:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:07.740 ************************************ 00:06:07.740 END TEST accel_assign_opcode 00:06:07.740 ************************************ 00:06:07.998 11:42:34 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3631718 00:06:07.998 11:42:34 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 3631718 ']' 00:06:07.998 11:42:34 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 3631718 00:06:07.998 11:42:34 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:06:07.998 11:42:34 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:07.998 11:42:34 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3631718 00:06:07.998 11:42:34 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:07.998 11:42:34 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:07.998 11:42:34 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3631718' 00:06:07.998 killing process with pid 3631718 00:06:07.998 11:42:34 accel_rpc -- common/autotest_common.sh@965 -- # kill 3631718 00:06:07.998 11:42:34 accel_rpc -- common/autotest_common.sh@970 -- # wait 3631718 00:06:08.256 00:06:08.256 real 0m1.616s 00:06:08.256 user 0m1.645s 00:06:08.256 sys 0m0.487s 00:06:08.256 11:42:35 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:08.256 11:42:35 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.256 ************************************ 00:06:08.256 END TEST accel_rpc 00:06:08.256 ************************************ 00:06:08.256 11:42:35 -- 
spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:08.256 11:42:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:08.256 11:42:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:08.256 11:42:35 -- common/autotest_common.sh@10 -- # set +x 00:06:08.256 ************************************ 00:06:08.256 START TEST app_cmdline 00:06:08.256 ************************************ 00:06:08.256 11:42:35 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:06:08.515 * Looking for test storage... 00:06:08.515 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:08.515 11:42:35 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:08.515 11:42:35 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3632097 00:06:08.515 11:42:35 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3632097 00:06:08.515 11:42:35 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:08.515 11:42:35 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 3632097 ']' 00:06:08.515 11:42:35 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.516 11:42:35 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:08.516 11:42:35 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.516 11:42:35 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:08.516 11:42:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:08.516 [2024-05-14 11:42:35.387370] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:06:08.516 [2024-05-14 11:42:35.387442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3632097 ] 00:06:08.516 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.516 [2024-05-14 11:42:35.454676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.516 [2024-05-14 11:42:35.528173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.452 11:42:36 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:09.452 11:42:36 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:06:09.452 11:42:36 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:09.452 { 00:06:09.452 "version": "SPDK v24.05-pre git sha1 b68ae4fb9", 00:06:09.452 "fields": { 00:06:09.452 "major": 24, 00:06:09.452 "minor": 5, 00:06:09.452 "patch": 0, 00:06:09.452 "suffix": "-pre", 00:06:09.452 "commit": "b68ae4fb9" 00:06:09.452 } 00:06:09.452 } 00:06:09.452 11:42:36 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:09.452 11:42:36 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:09.452 11:42:36 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:09.452 11:42:36 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:09.452 11:42:36 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:09.452 11:42:36 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.452 11:42:36 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:09.452 11:42:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:09.452 11:42:36 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:09.452 11:42:36 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.452 11:42:36 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:09.452 11:42:36 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:09.452 11:42:36 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:09.452 11:42:36 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:09.452 11:42:36 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:09.452 11:42:36 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:09.452 11:42:36 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.452 11:42:36 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:09.452 11:42:36 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.452 11:42:36 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:09.452 11:42:36 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.452 11:42:36 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:06:09.452 
11:42:36 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:06:09.452 11:42:36 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:09.711 request: 00:06:09.711 { 00:06:09.711 "method": "env_dpdk_get_mem_stats", 00:06:09.711 "req_id": 1 00:06:09.711 } 00:06:09.711 Got JSON-RPC error response 00:06:09.711 response: 00:06:09.711 { 00:06:09.711 "code": -32601, 00:06:09.711 "message": "Method not found" 00:06:09.711 } 00:06:09.711 11:42:36 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:09.711 11:42:36 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:09.711 11:42:36 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:09.711 11:42:36 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:09.711 11:42:36 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3632097 00:06:09.711 11:42:36 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 3632097 ']' 00:06:09.711 11:42:36 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 3632097 00:06:09.711 11:42:36 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:06:09.711 11:42:36 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:09.711 11:42:36 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3632097 00:06:09.711 11:42:36 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:09.711 11:42:36 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:09.711 11:42:36 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3632097' 00:06:09.711 killing process with pid 3632097 00:06:09.711 11:42:36 app_cmdline -- common/autotest_common.sh@965 -- # kill 3632097 00:06:09.711 11:42:36 app_cmdline -- common/autotest_common.sh@970 -- # wait 3632097 00:06:09.970 00:06:09.970 real 0m1.667s 00:06:09.970 user 0m1.951s 00:06:09.970 sys 0m0.466s 00:06:09.970 11:42:36 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:09.970 11:42:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:09.970 ************************************ 00:06:09.970 END TEST app_cmdline 00:06:09.970 ************************************ 00:06:09.970 11:42:36 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:09.970 11:42:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:09.970 11:42:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.970 11:42:36 -- common/autotest_common.sh@10 -- # set +x 00:06:09.970 ************************************ 00:06:09.970 START TEST version 00:06:09.970 ************************************ 00:06:09.970 11:42:37 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:06:10.229 * Looking for test storage... 
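For readability, the app_cmdline trace that just completed boils down to the sketch below: spdk_tgt is started with an RPC allowlist, the two whitelisted methods are exercised, and a non-whitelisted method is expected to fail with JSON-RPC error -32601 ("Method not found"). This is a condensed, hedged reconstruction, not the literal test/app/cmdline.sh; the helpers (rpc_cmd, waitforlisten, NOT, killprocess) are the ones sourced from test/common/autotest_common.sh in this run.

# Condensed sketch of the allowlist check traced above.
rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # workspace path used in this run
source "$rootdir/test/common/autotest_common.sh"

"$rootdir/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
spdk_tgt_pid=$!
trap 'killprocess $spdk_tgt_pid' EXIT
waitforlisten "$spdk_tgt_pid"

rpc_cmd spdk_get_version                       # allowed: returns the version JSON seen above
rpc_cmd rpc_get_methods | jq -r '.[]' | sort   # allowed: exactly the two whitelisted methods
NOT rpc_cmd env_dpdk_get_mem_stats             # not whitelisted: JSON-RPC -32601 "Method not found"

killprocess "$spdk_tgt_pid"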
00:06:10.229 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:10.229 11:42:37 version -- app/version.sh@17 -- # get_header_version major 00:06:10.229 11:42:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:10.229 11:42:37 version -- app/version.sh@14 -- # cut -f2 00:06:10.229 11:42:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.229 11:42:37 version -- app/version.sh@17 -- # major=24 00:06:10.229 11:42:37 version -- app/version.sh@18 -- # get_header_version minor 00:06:10.229 11:42:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:10.229 11:42:37 version -- app/version.sh@14 -- # cut -f2 00:06:10.229 11:42:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.229 11:42:37 version -- app/version.sh@18 -- # minor=5 00:06:10.229 11:42:37 version -- app/version.sh@19 -- # get_header_version patch 00:06:10.229 11:42:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:10.229 11:42:37 version -- app/version.sh@14 -- # cut -f2 00:06:10.229 11:42:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.229 11:42:37 version -- app/version.sh@19 -- # patch=0 00:06:10.229 11:42:37 version -- app/version.sh@20 -- # get_header_version suffix 00:06:10.229 11:42:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:06:10.229 11:42:37 version -- app/version.sh@14 -- # cut -f2 00:06:10.229 11:42:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.229 11:42:37 version -- app/version.sh@20 -- # suffix=-pre 00:06:10.229 11:42:37 version -- app/version.sh@22 -- # version=24.5 00:06:10.229 11:42:37 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:10.229 11:42:37 version -- app/version.sh@28 -- # version=24.5rc0 00:06:10.229 11:42:37 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:10.229 11:42:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:10.229 11:42:37 version -- app/version.sh@30 -- # py_version=24.5rc0 00:06:10.229 11:42:37 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:10.229 00:06:10.229 real 0m0.181s 00:06:10.229 user 0m0.090s 00:06:10.229 sys 0m0.133s 00:06:10.229 11:42:37 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.229 11:42:37 version -- common/autotest_common.sh@10 -- # set +x 00:06:10.229 ************************************ 00:06:10.229 END TEST version 00:06:10.229 ************************************ 00:06:10.229 11:42:37 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:10.229 11:42:37 -- spdk/autotest.sh@194 -- # uname -s 00:06:10.229 11:42:37 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:10.229 11:42:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:10.229 11:42:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:10.229 11:42:37 -- spdk/autotest.sh@207 -- 
# '[' 0 -eq 1 ']' 00:06:10.229 11:42:37 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:10.229 11:42:37 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:10.229 11:42:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:10.229 11:42:37 -- common/autotest_common.sh@10 -- # set +x 00:06:10.229 11:42:37 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:10.229 11:42:37 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:10.229 11:42:37 -- spdk/autotest.sh@275 -- # '[' 0 -eq 1 ']' 00:06:10.229 11:42:37 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:06:10.229 11:42:37 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:06:10.229 11:42:37 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:06:10.229 11:42:37 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:06:10.229 11:42:37 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:06:10.229 11:42:37 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:06:10.230 11:42:37 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:06:10.230 11:42:37 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:06:10.230 11:42:37 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:06:10.230 11:42:37 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:06:10.230 11:42:37 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:06:10.230 11:42:37 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:06:10.230 11:42:37 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:06:10.230 11:42:37 -- spdk/autotest.sh@367 -- # [[ 1 -eq 1 ]] 00:06:10.230 11:42:37 -- spdk/autotest.sh@368 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:10.230 11:42:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:10.230 11:42:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.230 11:42:37 -- common/autotest_common.sh@10 -- # set +x 00:06:10.489 ************************************ 00:06:10.489 START TEST llvm_fuzz 00:06:10.489 ************************************ 00:06:10.489 11:42:37 llvm_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:06:10.489 * Looking for test storage... 
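The version trace above reduces to a small header-parsing helper: each SPDK_VERSION_* field is pulled out of include/spdk/version.h with grep/cut/tr, assembled into "24.5" (becoming "24.5rc0" for the "-pre" suffix), and compared against what the Python bindings report. A hedged sketch of that flow, using the same commands the trace shows; the rc0 mapping and the uppercase helper argument are simplifications of the real test/app/version.sh:

# Sketch of the version check traced above.
rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

get_header_version() {   # e.g. get_header_version MAJOR -> 24
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
        "$rootdir/include/spdk/version.h" | cut -f2 | tr -d '"'
}

major=$(get_header_version MAJOR)     # 24
minor=$(get_header_version MINOR)     # 5
patch=$(get_header_version PATCH)     # 0
suffix=$(get_header_version SUFFIX)   # -pre

version="$major.$minor"
(( patch != 0 )) && version="$version.$patch"
[[ $suffix == -pre ]] && version="${version}rc0"    # 24.5rc0, as in the trace

# PYTHONPATH must include $rootdir/python, as exported in the trace above.
py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
[[ $py_version == "$version" ]]       # both 24.5rc0 in this run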
00:06:10.489 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:06:10.489 11:42:37 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:06:10.489 11:42:37 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:06:10.489 11:42:37 llvm_fuzz -- common/autotest_common.sh@546 -- # fuzzers=() 00:06:10.489 11:42:37 llvm_fuzz -- common/autotest_common.sh@546 -- # local fuzzers 00:06:10.489 11:42:37 llvm_fuzz -- common/autotest_common.sh@548 -- # [[ -n '' ]] 00:06:10.489 11:42:37 llvm_fuzz -- common/autotest_common.sh@551 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:06:10.489 11:42:37 llvm_fuzz -- common/autotest_common.sh@552 -- # fuzzers=("${fuzzers[@]##*/}") 00:06:10.489 11:42:37 llvm_fuzz -- common/autotest_common.sh@555 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:06:10.489 11:42:37 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:06:10.489 11:42:37 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:06:10.489 11:42:37 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:06:10.489 11:42:37 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:10.489 11:42:37 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:10.489 11:42:37 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:10.489 11:42:37 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:10.489 11:42:37 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:06:10.489 11:42:37 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:06:10.489 11:42:37 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:10.489 11:42:37 llvm_fuzz -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:10.489 11:42:37 llvm_fuzz -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.489 11:42:37 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:06:10.489 ************************************ 00:06:10.489 START TEST nvmf_fuzz 00:06:10.489 ************************************ 00:06:10.489 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:10.489 * Looking for test storage... 
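The llvm_fuzz trace above shows the dispatcher in test/fuzz/llvm.sh enumerating everything under test/fuzz/llvm/ (common.sh, llvm-gcov.sh, nvmf, vfio), stripping the directory prefix, and invoking run_test only for real fuzzer targets; in this short run only the nvmf target is dispatched below. A rough sketch of that loop (the vfio arm is inferred, since this excerpt only exercises nvmf):

# Sketch of the target dispatch in test/fuzz/llvm.sh as traced above.
fuzzers=("$rootdir/test/fuzz/llvm/"*)    # common.sh llvm-gcov.sh nvmf vfio
fuzzers=("${fuzzers[@]##*/}")            # keep basenames only

for fuzzer in "${fuzzers[@]}"; do
    case "$fuzzer" in
        nvmf) run_test "nvmf_fuzz" "$rootdir/test/fuzz/llvm/nvmf/run.sh" ;;
        vfio) run_test "vfio_fuzz" "$rootdir/test/fuzz/llvm/vfio/run.sh" ;;  # inferred arm
        *) ;;                            # helper scripts fall through, as in the trace
    esac
done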
00:06:10.752 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:10.752 11:42:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:06:10.752 11:42:37 llvm_fuzz.nvmf_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:06:10.752 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:10.752 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@34 -- # set -e 00:06:10.752 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:10.752 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:10.752 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:06:10.752 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:10.752 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:06:10.752 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:10.752 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:10.752 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:10.752 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:10.752 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:10.752 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:10.752 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:10.752 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:10.752 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:10.752 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:10.752 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz 
-- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- 
common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/applications.sh@14 -- # 
VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:10.753 #define SPDK_CONFIG_H 00:06:10.753 #define SPDK_CONFIG_APPS 1 00:06:10.753 #define SPDK_CONFIG_ARCH native 00:06:10.753 #undef SPDK_CONFIG_ASAN 00:06:10.753 #undef SPDK_CONFIG_AVAHI 00:06:10.753 #undef SPDK_CONFIG_CET 00:06:10.753 #define SPDK_CONFIG_COVERAGE 1 00:06:10.753 #define SPDK_CONFIG_CROSS_PREFIX 00:06:10.753 #undef SPDK_CONFIG_CRYPTO 00:06:10.753 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:10.753 #undef SPDK_CONFIG_CUSTOMOCF 00:06:10.753 #undef SPDK_CONFIG_DAOS 00:06:10.753 #define SPDK_CONFIG_DAOS_DIR 00:06:10.753 #define SPDK_CONFIG_DEBUG 1 00:06:10.753 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:10.753 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:10.753 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:10.753 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:10.753 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:10.753 #undef SPDK_CONFIG_DPDK_UADK 00:06:10.753 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:10.753 #define SPDK_CONFIG_EXAMPLES 1 00:06:10.753 #undef SPDK_CONFIG_FC 00:06:10.753 #define SPDK_CONFIG_FC_PATH 00:06:10.753 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:10.753 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:10.753 #undef SPDK_CONFIG_FUSE 00:06:10.753 #define SPDK_CONFIG_FUZZER 1 00:06:10.753 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:06:10.753 #undef SPDK_CONFIG_GOLANG 00:06:10.753 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:10.753 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:10.753 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:10.753 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:10.753 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:10.753 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:10.753 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:10.753 #define SPDK_CONFIG_IDXD 1 00:06:10.753 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:10.753 #undef SPDK_CONFIG_IPSEC_MB 00:06:10.753 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:10.753 #define SPDK_CONFIG_ISAL 1 00:06:10.753 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:10.753 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:10.753 #define SPDK_CONFIG_LIBDIR 00:06:10.753 #undef SPDK_CONFIG_LTO 00:06:10.753 #define SPDK_CONFIG_MAX_LCORES 00:06:10.753 #define SPDK_CONFIG_NVME_CUSE 1 00:06:10.753 #undef SPDK_CONFIG_OCF 00:06:10.753 #define SPDK_CONFIG_OCF_PATH 00:06:10.753 #define SPDK_CONFIG_OPENSSL_PATH 00:06:10.753 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:10.753 #define SPDK_CONFIG_PGO_DIR 00:06:10.753 #undef SPDK_CONFIG_PGO_USE 00:06:10.753 #define SPDK_CONFIG_PREFIX /usr/local 00:06:10.753 #undef SPDK_CONFIG_RAID5F 00:06:10.753 #undef SPDK_CONFIG_RBD 00:06:10.753 #define SPDK_CONFIG_RDMA 1 
00:06:10.753 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:10.753 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:10.753 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:10.753 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:10.753 #undef SPDK_CONFIG_SHARED 00:06:10.753 #undef SPDK_CONFIG_SMA 00:06:10.753 #define SPDK_CONFIG_TESTS 1 00:06:10.753 #undef SPDK_CONFIG_TSAN 00:06:10.753 #define SPDK_CONFIG_UBLK 1 00:06:10.753 #define SPDK_CONFIG_UBSAN 1 00:06:10.753 #undef SPDK_CONFIG_UNIT_TESTS 00:06:10.753 #undef SPDK_CONFIG_URING 00:06:10.753 #define SPDK_CONFIG_URING_PATH 00:06:10.753 #undef SPDK_CONFIG_URING_ZNS 00:06:10.753 #undef SPDK_CONFIG_USDT 00:06:10.753 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:10.753 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:10.753 #define SPDK_CONFIG_VFIO_USER 1 00:06:10.753 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:10.753 #define SPDK_CONFIG_VHOST 1 00:06:10.753 #define SPDK_CONFIG_VIRTIO 1 00:06:10.753 #undef SPDK_CONFIG_VTUNE 00:06:10.753 #define SPDK_CONFIG_VTUNE_DIR 00:06:10.753 #define SPDK_CONFIG_WERROR 1 00:06:10.753 #define SPDK_CONFIG_WPDK_DIR 00:06:10.753 #undef SPDK_CONFIG_XNVME 00:06:10.753 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.753 11:42:37 llvm_fuzz.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- pm/common@68 -- # uname -s 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- pm/common@68 -- # PM_OS=Linux 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- pm/common@76 -- # SUDO[0]= 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@57 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@61 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@63 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@65 -- # : 1 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@67 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@69 -- # : 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@71 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@73 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@75 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@77 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@79 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@81 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@83 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@85 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@87 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@89 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@91 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@93 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@95 -- # : 0 00:06:10.754 
11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@97 -- # : 1 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@99 -- # : 1 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@101 -- # : rdma 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@103 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@105 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@107 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@109 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@111 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@113 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@115 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@117 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@119 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@121 -- # : 1 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@123 -- # : 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@125 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@127 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@129 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@131 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@133 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@135 -- # : 0 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@137 -- # : 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@139 -- # : true 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:06:10.754 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@141 -- # : 0 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@143 -- # : 0 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@145 -- # : 0 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@147 -- # : 0 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@149 -- # : 0 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@151 -- # : 0 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@153 -- # : 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@155 -- # : 0 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@157 -- # : 0 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@159 -- # : 0 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@161 -- # : 0 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@163 -- # : 0 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@166 -- # : 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@168 -- # : 0 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@170 -- # : 0 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 
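The long run of ": 0" / "export SPDK_TEST_*" pairs above is autotest_common.sh giving every test flag a default and exporting it; flags pre-set by the job configuration keep their value. A minimal sketch of that idiom, with the flags that are 1 in this run (one such pair exists per flag in the real file):

# Default-then-export idiom behind the trace above (sketch).
: "${SPDK_RUN_FUNCTIONAL_TEST:=0}"; export SPDK_RUN_FUNCTIONAL_TEST   # 1 in this run (job config)
: "${SPDK_TEST_FUZZER:=0}";         export SPDK_TEST_FUZZER           # 1 in this run (job config)
: "${SPDK_TEST_FUZZER_SHORT:=0}";   export SPDK_TEST_FUZZER_SHORT     # 1 in this run (job config)
: "${SPDK_RUN_UBSAN:=0}";           export SPDK_RUN_UBSAN             # 1 in this run (job config)
: "${SPDK_TEST_NVMF:=0}";           export SPDK_TEST_NVMF             # stays 0 in this run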
00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@184 -- # export 
PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@199 -- # cat 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@249 -- # export 
SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@262 -- # export valgrind= 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@262 -- # valgrind= 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@268 -- # uname -s 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:06:10.755 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@278 -- # MAKE=make 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j112 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@298 -- # TEST_MODE= 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@317 -- # [[ -z 3632734 ]] 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@317 -- # kill -0 3632734 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@330 -- # local mount target_dir 00:06:10.756 
11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.IkFenK 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.IkFenK/tests/nvmf /tmp/spdk.IkFenK 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@326 -- # df -T 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=968232960 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4316196864 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=52959305728 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=61742305280 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=8782999552 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=30866440192 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871150592 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=12342489088 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=12348461056 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=5971968 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=30869934080 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871154688 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=1220608 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=6174224384 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=6174228480 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:06:10.756 * Looking for test storage... 
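For reference, the storage-selection pass traced above reduces to roughly the following bash sketch. It is a reconstruction from the xtrace output, not the actual autotest_common.sh source; the byte-granular df invocation and the simplified error handling are assumptions.

  # Candidate directories, as built in the trace above.
  testdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf
  storage_fallback=$(mktemp -udt spdk.XXXXXX)
  storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")

  requested_size=$((2147483648 + 64 * 1024 * 1024))   # 2 GiB plus slack -> 2214592512, as in the trace

  # Index df -T output by mount point (assumes GNU df with --block-size=1).
  declare -A fss sizes avails uses
  while read -r _ fs size used avail _ mount; do
      fss["$mount"]=$fs; sizes["$mount"]=$size
      uses["$mount"]=$used; avails["$mount"]=$avail
  done < <(df -T --block-size=1 | grep -v Filesystem)

  # Take the first candidate whose filesystem can hold the request; on a regular
  # (non-tmpfs, non-ramfs) filesystem also refuse to push usage past ~95%.
  for target_dir in "${storage_candidates[@]}"; do
      mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
      target_space=${avails[$mount]}
      (( target_space >= requested_size )) || continue
      if [[ ${fss[$mount]} != tmpfs && ${fss[$mount]} != ramfs ]]; then
          new_size=$(( uses[$mount] + requested_size ))   # 8782999552 + 2214592512 = 10997592064 in this run
          (( new_size * 100 / sizes[$mount] > 95 )) && continue
      fi
      export SPDK_TEST_STORAGE=$target_dir
      break
  done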
00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@367 -- # local target_space new_size 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@371 -- # mount=/ 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@373 -- # target_space=52959305728 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@380 -- # new_size=10997592064 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:10.756 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@388 -- # return 0 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1683 -- # true 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@27 -- # exec 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@29 -- # exec 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@18 -- # set -x 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- ../common.sh@8 -- # pids=() 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- ../common.sh@70 -- # local time=1 00:06:10.756 11:42:37 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:06:10.757 11:42:37 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:10.757 11:42:37 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:06:10.757 11:42:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:06:10.757 11:42:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:10.757 11:42:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:10.757 11:42:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:10.757 11:42:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:06:10.757 11:42:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:10.757 11:42:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:10.757 11:42:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:06:10.757 11:42:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4400 00:06:10.757 11:42:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:10.757 11:42:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:06:10.757 11:42:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:10.757 11:42:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:10.757 11:42:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:10.757 11:42:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 
-s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:06:10.757 [2024-05-14 11:42:37.800445] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:06:10.757 [2024-05-14 11:42:37.800518] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3632774 ] 00:06:10.757 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.016 [2024-05-14 11:42:38.052845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.275 [2024-05-14 11:42:38.145648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.275 [2024-05-14 11:42:38.204411] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:11.275 [2024-05-14 11:42:38.220363] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:11.275 [2024-05-14 11:42:38.220785] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:06:11.275 INFO: Running with entropic power schedule (0xFF, 100). 00:06:11.275 INFO: Seed: 1780385604 00:06:11.275 INFO: Loaded 1 modules (351429 inline 8-bit counters): 351429 [0x290e30c, 0x2963fd1), 00:06:11.275 INFO: Loaded 1 PC tables (351429 PCs): 351429 [0x2963fd8,0x2ec0c28), 00:06:11.275 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:11.275 INFO: A corpus is not provided, starting from an empty corpus 00:06:11.275 #2 INITED exec/s: 0 rss: 63Mb 00:06:11.275 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:11.275 This may also happen if the target rejected all inputs we tried so far 00:06:11.275 [2024-05-14 11:42:38.291422] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.275 [2024-05-14 11:42:38.291460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.533 NEW_FUNC[1/685]: 0x481d20 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:06:11.533 NEW_FUNC[2/685]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:11.533 #17 NEW cov: 11767 ft: 11768 corp: 2/114b lim: 320 exec/s: 0 rss: 70Mb L: 113/113 MS: 5 CrossOver-InsertByte-EraseBytes-EraseBytes-InsertRepeatedBytes- 00:06:11.792 [2024-05-14 11:42:38.641584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (e6) qid:0 cid:4 nsid:3a3a3a3a cdw10:3a3a3a3a cdw11:3a3a3a3a SGL TRANSPORT DATA BLOCK TRANSPORT 0x3a3a3a3a3a3a3a3a 00:06:11.792 [2024-05-14 11:42:38.641622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.792 #21 NEW cov: 11919 ft: 12588 corp: 3/214b lim: 320 exec/s: 0 rss: 71Mb L: 100/113 MS: 4 ChangeByte-ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:06:11.792 [2024-05-14 11:42:38.691581] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.792 [2024-05-14 11:42:38.691614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.792 #25 NEW cov: 11925 ft: 12873 corp: 4/329b lim: 320 exec/s: 0 rss: 71Mb L: 115/115 MS: 4 CopyPart-InsertByte-EraseBytes-CrossOver- 00:06:11.792 [2024-05-14 11:42:38.741786] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.792 [2024-05-14 11:42:38.741817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.792 #26 NEW cov: 12010 ft: 13131 corp: 5/420b lim: 320 exec/s: 0 rss: 71Mb L: 91/115 MS: 1 EraseBytes- 00:06:11.792 [2024-05-14 11:42:38.801854] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.792 [2024-05-14 11:42:38.801884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.792 #27 NEW cov: 12010 ft: 13285 corp: 6/511b lim: 320 exec/s: 0 rss: 71Mb L: 91/115 MS: 1 ShuffleBytes- 00:06:11.792 [2024-05-14 11:42:38.862096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (e6) qid:0 cid:4 nsid:3a3a3a3a cdw10:3a3a3a3a cdw11:3a3a3a3a SGL TRANSPORT DATA BLOCK TRANSPORT 0x3a3a3a3a3a3a3a3a 00:06:11.792 [2024-05-14 11:42:38.862125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.051 #28 NEW cov: 12010 ft: 13373 corp: 7/611b lim: 320 exec/s: 0 rss: 71Mb L: 100/115 MS: 1 CopyPart- 00:06:12.051 [2024-05-14 11:42:38.911930] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:12.051 [2024-05-14 11:42:38.911959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.051 #29 NEW cov: 12010 ft: 13596 corp: 8/726b lim: 320 exec/s: 0 rss: 71Mb L: 115/115 MS: 1 CrossOver- 00:06:12.051 [2024-05-14 11:42:38.962361] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.051 [2024-05-14 11:42:38.962396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.051 #30 NEW cov: 12010 ft: 13677 corp: 9/839b lim: 320 exec/s: 0 rss: 71Mb L: 113/115 MS: 1 ChangeByte- 00:06:12.051 [2024-05-14 11:42:39.022654] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.051 [2024-05-14 11:42:39.022683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.051 #31 NEW cov: 12010 ft: 13740 corp: 10/960b lim: 320 exec/s: 0 rss: 71Mb L: 121/121 MS: 1 CMP- DE: "\376\003\000\000\000\000\000\000"- 00:06:12.051 [2024-05-14 11:42:39.072767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (e6) qid:0 cid:4 nsid:3a3a3a3a cdw10:3a3a3a3a cdw11:3a3a3a3a SGL TRANSPORT DATA BLOCK TRANSPORT 0x3a3a3a3a083a3a3a 00:06:12.051 [2024-05-14 11:42:39.072794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.051 #32 NEW cov: 12010 ft: 13798 corp: 11/1061b lim: 320 exec/s: 0 rss: 72Mb L: 101/121 MS: 1 InsertByte- 00:06:12.051 [2024-05-14 11:42:39.132962] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.051 [2024-05-14 11:42:39.132990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.310 NEW_FUNC[1/1]: 0x19feca0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:12.310 #33 NEW cov: 12033 ft: 13857 corp: 12/1152b lim: 320 exec/s: 0 rss: 72Mb L: 91/121 MS: 1 CrossOver- 00:06:12.310 [2024-05-14 11:42:39.183127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (e6) qid:0 cid:4 nsid:3a3a3a3a cdw10:3a3a3a3a cdw11:3a3a3a3a SGL TRANSPORT DATA BLOCK TRANSPORT 0x3a3a3a3a3a3a3a3a 00:06:12.310 [2024-05-14 11:42:39.183154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.310 #34 NEW cov: 12033 ft: 13876 corp: 13/1264b lim: 320 exec/s: 0 rss: 72Mb L: 112/121 MS: 1 InsertRepeatedBytes- 00:06:12.310 [2024-05-14 11:42:39.233341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (e6) qid:0 cid:4 nsid:3a3a3a3a cdw10:3a3a3a3a cdw11:3a3a3a3a SGL TRANSPORT DATA BLOCK TRANSPORT 0x3a3a3a3a3a3a3a3a 00:06:12.310 [2024-05-14 11:42:39.233368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.310 #35 NEW cov: 12033 ft: 13894 corp: 14/1372b lim: 320 exec/s: 0 rss: 72Mb L: 108/121 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:06:12.310 [2024-05-14 11:42:39.283886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (e6) qid:0 cid:4 nsid:3a3a3a3a 
cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:12.310 [2024-05-14 11:42:39.283914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.310 [2024-05-14 11:42:39.284063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:12.310 [2024-05-14 11:42:39.284084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.310 [2024-05-14 11:42:39.284224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:3a3a3a3a cdw11:3a3a3a3a SGL TRANSPORT DATA BLOCK TRANSPORT 0x3a3a3a3a3a3a3a3a 00:06:12.310 [2024-05-14 11:42:39.284242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.310 NEW_FUNC[1/1]: 0x1334440 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2038 00:06:12.310 #36 NEW cov: 12064 ft: 14178 corp: 15/1612b lim: 320 exec/s: 36 rss: 72Mb L: 240/240 MS: 1 InsertRepeatedBytes- 00:06:12.310 [2024-05-14 11:42:39.353713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (e6) qid:0 cid:4 nsid:3a3a3a3a cdw10:3a3a3a3a cdw11:3a3a3a3a SGL TRANSPORT DATA BLOCK TRANSPORT 0x3a3a3a3a3a3a3a3a 00:06:12.310 [2024-05-14 11:42:39.353740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.310 #37 NEW cov: 12064 ft: 14202 corp: 16/1712b lim: 320 exec/s: 37 rss: 72Mb L: 100/240 MS: 1 ChangeBinInt- 00:06:12.569 [2024-05-14 11:42:39.403917] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.569 [2024-05-14 11:42:39.403947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.569 #38 NEW cov: 12064 ft: 14241 corp: 17/1794b lim: 320 exec/s: 38 rss: 72Mb L: 82/240 MS: 1 EraseBytes- 00:06:12.569 [2024-05-14 11:42:39.454017] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.569 [2024-05-14 11:42:39.454045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.569 #39 NEW cov: 12064 ft: 14255 corp: 18/1885b lim: 320 exec/s: 39 rss: 72Mb L: 91/240 MS: 1 ChangeBinInt- 00:06:12.569 [2024-05-14 11:42:39.504280] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.569 [2024-05-14 11:42:39.504308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.569 #40 NEW cov: 12064 ft: 14274 corp: 19/1976b lim: 320 exec/s: 40 rss: 72Mb L: 91/240 MS: 1 ShuffleBytes- 00:06:12.569 [2024-05-14 11:42:39.554325] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.569 [2024-05-14 11:42:39.554352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
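The coverage lines above come from the llvm_nvme_fuzz invocation launched earlier in this log. A single target such as this one can be replayed by hand with the two steps below, assembled from the traced run.sh commands. The SPDK variable is introduced here only to shorten the paths; the redirection into /tmp/fuzz_json_0.conf is inferred from the -c argument rather than shown verbatim in the trace; and raising -t (the per-target time budget, 1 second in this run) is an assumption for a longer manual session.

  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

  # 1. Build the per-target JSON config: the listener is moved from the default
  #    port 4420 to "44" plus the zero-padded target index, 4400 for target 0 here.
  sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' \
      "$SPDK/test/fuzz/llvm/nvmf/fuzz_json.conf" > /tmp/fuzz_json_0.conf

  # 2. Launch the fuzzer against that config and corpus; -Z selects the target
  #    index (0 is the admin-command fuzzer whose output appears above).
  #    The traced run also points LSAN_OPTIONS at a suppression file containing
  #    leak:spdk_nvmf_qpair_disconnect and leak:nvmf_ctrlr_create; omitted here.
  "$SPDK/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
      -P "$SPDK/../output/llvm/" \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' \
      -c /tmp/fuzz_json_0.conf -t 60 -D "$SPDK/../corpus/llvm_nvmf_0" -Z 0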
00:06:12.569 #41 NEW cov: 12064 ft: 14288 corp: 20/2097b lim: 320 exec/s: 41 rss: 72Mb L: 121/240 MS: 1 CMP- DE: "5\322kfE=\205\000"- 00:06:12.569 [2024-05-14 11:42:39.593975] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:fa000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.569 [2024-05-14 11:42:39.594005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.569 #42 NEW cov: 12064 ft: 14332 corp: 21/2218b lim: 320 exec/s: 42 rss: 72Mb L: 121/240 MS: 1 ChangeBinInt- 00:06:12.569 [2024-05-14 11:42:39.644539] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.569 [2024-05-14 11:42:39.644567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.827 #43 NEW cov: 12064 ft: 14379 corp: 22/2317b lim: 320 exec/s: 43 rss: 72Mb L: 99/240 MS: 1 PersAutoDict- DE: "5\322kfE=\205\000"- 00:06:12.827 [2024-05-14 11:42:39.694763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (e6) qid:0 cid:4 nsid:3a3a3a0d cdw10:3a3a3a3a cdw11:3a3a3a3a SGL TRANSPORT DATA BLOCK TRANSPORT 0x3a3a3a3a3a3a3a3a 00:06:12.827 [2024-05-14 11:42:39.694790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.827 #44 NEW cov: 12064 ft: 14395 corp: 23/2417b lim: 320 exec/s: 44 rss: 72Mb L: 100/240 MS: 1 ChangeByte- 00:06:12.827 [2024-05-14 11:42:39.744980] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.827 [2024-05-14 11:42:39.745008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.827 #45 NEW cov: 12064 ft: 14439 corp: 24/2538b lim: 320 exec/s: 45 rss: 73Mb L: 121/240 MS: 1 PersAutoDict- DE: "\376\003\000\000\000\000\000\000"- 00:06:12.827 [2024-05-14 11:42:39.795542] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.827 [2024-05-14 11:42:39.795572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.827 [2024-05-14 11:42:39.795693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:12.827 [2024-05-14 11:42:39.795712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.827 [2024-05-14 11:42:39.795831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:12.827 [2024-05-14 11:42:39.795847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.827 #46 NEW cov: 12066 ft: 14490 corp: 25/2730b lim: 320 exec/s: 46 rss: 73Mb L: 192/240 MS: 1 CopyPart- 00:06:12.827 [2024-05-14 11:42:39.855752] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.827 [2024-05-14 11:42:39.855782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:12.827 [2024-05-14 11:42:39.855901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:12.827 [2024-05-14 11:42:39.855920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.827 [2024-05-14 11:42:39.856039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:12.827 [2024-05-14 11:42:39.856057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.827 #47 NEW cov: 12066 ft: 14527 corp: 26/2923b lim: 320 exec/s: 47 rss: 73Mb L: 193/240 MS: 1 InsertByte- 00:06:12.827 [2024-05-14 11:42:39.915524] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:12.827 [2024-05-14 11:42:39.915552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.086 #48 NEW cov: 12066 ft: 14550 corp: 27/3014b lim: 320 exec/s: 48 rss: 73Mb L: 91/240 MS: 1 ShuffleBytes- 00:06:13.086 [2024-05-14 11:42:39.975693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (54) qid:0 cid:4 nsid:54545454 cdw10:54545454 cdw11:54545454 00:06:13.086 [2024-05-14 11:42:39.975723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.086 #49 NEW cov: 12066 ft: 14567 corp: 28/3106b lim: 320 exec/s: 49 rss: 73Mb L: 92/240 MS: 1 InsertRepeatedBytes- 00:06:13.086 [2024-05-14 11:42:40.016008] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xfb00000000000000 00:06:13.086 [2024-05-14 11:42:40.016038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.086 #50 NEW cov: 12066 ft: 14577 corp: 29/3197b lim: 320 exec/s: 50 rss: 73Mb L: 91/240 MS: 1 ChangeBinInt- 00:06:13.086 [2024-05-14 11:42:40.066108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (54) qid:0 cid:4 nsid:54545454 cdw10:54545454 cdw11:54545454 00:06:13.086 [2024-05-14 11:42:40.066136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.086 #51 NEW cov: 12066 ft: 14620 corp: 30/3289b lim: 320 exec/s: 51 rss: 73Mb L: 92/240 MS: 1 ChangeBit- 00:06:13.086 [2024-05-14 11:42:40.126437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (e6) qid:0 cid:4 nsid:3a3a3a3a cdw10:3a3a3a3a cdw11:3a3a3a3a SGL TRANSPORT DATA BLOCK TRANSPORT 0x3a3a3a3a3a3a3a3a 00:06:13.086 [2024-05-14 11:42:40.126469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.086 [2024-05-14 11:42:40.126601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (3a) qid:0 cid:5 nsid:3a3a3a3a cdw10:3a3a3aff cdw11:3a3a3a3a SGL TRANSPORT DATA BLOCK TRANSPORT 0x3a3a3a3a3a3a3a3a 00:06:13.086 [2024-05-14 11:42:40.126620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.086 #52 NEW cov: 12066 ft: 14833 corp: 31/3475b lim: 320 exec/s: 52 rss: 73Mb L: 
186/240 MS: 1 CopyPart- 00:06:13.345 [2024-05-14 11:42:40.186386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (e6) qid:0 cid:4 nsid:3a3a3a3a cdw10:3a3a3a3a cdw11:3a3a3a3a SGL TRANSPORT DATA BLOCK TRANSPORT 0x3a3a3a3a3a3a3a3a 00:06:13.345 [2024-05-14 11:42:40.186416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.345 #53 NEW cov: 12066 ft: 14869 corp: 32/3539b lim: 320 exec/s: 53 rss: 73Mb L: 64/240 MS: 1 EraseBytes- 00:06:13.345 [2024-05-14 11:42:40.236599] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.346 [2024-05-14 11:42:40.236628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.346 #59 NEW cov: 12066 ft: 14911 corp: 33/3654b lim: 320 exec/s: 29 rss: 73Mb L: 115/240 MS: 1 ChangeByte- 00:06:13.346 #59 DONE cov: 12066 ft: 14911 corp: 33/3654b lim: 320 exec/s: 29 rss: 73Mb 00:06:13.346 ###### Recommended dictionary. ###### 00:06:13.346 "\376\003\000\000\000\000\000\000" # Uses: 1 00:06:13.346 "\377\377\377\377\377\377\377\377" # Uses: 1 00:06:13.346 "5\322kfE=\205\000" # Uses: 1 00:06:13.346 ###### End of recommended dictionary. ###### 00:06:13.346 Done 59 runs in 2 second(s) 00:06:13.346 [2024-05-14 11:42:40.265900] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:13.346 11:42:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:06:13.346 11:42:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:13.346 11:42:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:13.346 11:42:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:06:13.346 11:42:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:06:13.346 11:42:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:13.346 11:42:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:13.346 11:42:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:13.346 11:42:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:06:13.346 11:42:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:13.346 11:42:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:13.346 11:42:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:06:13.346 11:42:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4401 00:06:13.346 11:42:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:13.346 11:42:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:06:13.346 11:42:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:13.346 11:42:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:13.346 
11:42:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:13.346 11:42:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:06:13.604 [2024-05-14 11:42:40.437600] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:06:13.604 [2024-05-14 11:42:40.437694] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3633312 ] 00:06:13.604 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.604 [2024-05-14 11:42:40.689651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.863 [2024-05-14 11:42:40.784338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.863 [2024-05-14 11:42:40.843831] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:13.863 [2024-05-14 11:42:40.859777] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:13.863 [2024-05-14 11:42:40.860183] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:06:13.863 INFO: Running with entropic power schedule (0xFF, 100). 00:06:13.863 INFO: Seed: 124398209 00:06:13.863 INFO: Loaded 1 modules (351429 inline 8-bit counters): 351429 [0x290e30c, 0x2963fd1), 00:06:13.863 INFO: Loaded 1 PC tables (351429 PCs): 351429 [0x2963fd8,0x2ec0c28), 00:06:13.863 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:13.863 INFO: A corpus is not provided, starting from an empty corpus 00:06:13.863 #2 INITED exec/s: 0 rss: 63Mb 00:06:13.863 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:13.863 This may also happen if the target rejected all inputs we tried so far 00:06:13.863 [2024-05-14 11:42:40.908667] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:13.863 [2024-05-14 11:42:40.908785] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:06:13.863 [2024-05-14 11:42:40.908989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.863 [2024-05-14 11:42:40.909019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.863 [2024-05-14 11:42:40.909074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:13.863 [2024-05-14 11:42:40.909089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.431 NEW_FUNC[1/684]: 0x482620 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:06:14.431 NEW_FUNC[2/684]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:14.431 #5 NEW cov: 11863 ft: 11860 corp: 2/14b lim: 30 exec/s: 0 rss: 70Mb L: 13/13 MS: 3 CopyPart-CMP-CMP- DE: "\377\377\377\377"-"\001\000\000\000\000\000\000\000"- 00:06:14.431 [2024-05-14 11:42:41.239454] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:14.431 [2024-05-14 11:42:41.239777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.431 [2024-05-14 11:42:41.239807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.431 [2024-05-14 11:42:41.239861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.431 [2024-05-14 11:42:41.239876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.431 NEW_FUNC[1/2]: 0x1584d70 in nvme_qpair_get_state /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1494 00:06:14.431 NEW_FUNC[2/2]: 0x1a34960 in sock_group_impl_poll_count /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/sock/sock.c:712 00:06:14.431 #6 NEW cov: 12020 ft: 12529 corp: 3/27b lim: 30 exec/s: 0 rss: 70Mb L: 13/13 MS: 1 CopyPart- 00:06:14.431 [2024-05-14 11:42:41.289522] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:14.431 [2024-05-14 11:42:41.289634] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xff 00:06:14.431 [2024-05-14 11:42:41.289834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.431 [2024-05-14 11:42:41.289861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.431 [2024-05-14 11:42:41.289916] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000021 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.431 [2024-05-14 11:42:41.289930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.431 #7 NEW cov: 12026 ft: 12884 corp: 4/41b lim: 30 exec/s: 0 rss: 70Mb L: 14/14 MS: 1 InsertByte- 00:06:14.431 [2024-05-14 11:42:41.329596] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261568) > buf size (4096) 00:06:14.431 [2024-05-14 11:42:41.329910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff6f0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.431 [2024-05-14 11:42:41.329935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.431 [2024-05-14 11:42:41.329991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.431 [2024-05-14 11:42:41.330004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.431 #8 NEW cov: 12111 ft: 13175 corp: 5/54b lim: 30 exec/s: 0 rss: 70Mb L: 13/14 MS: 1 ChangeByte- 00:06:14.431 [2024-05-14 11:42:41.379856] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (786432) > buf size (4096) 00:06:14.431 [2024-05-14 11:42:41.380063] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff0a 00:06:14.431 [2024-05-14 11:42:41.380263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0201 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.431 [2024-05-14 11:42:41.380289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.431 [2024-05-14 11:42:41.380345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.431 [2024-05-14 11:42:41.380359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.431 [2024-05-14 11:42:41.380416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:21008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.431 [2024-05-14 11:42:41.380430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.431 #9 NEW cov: 12111 ft: 13492 corp: 6/72b lim: 30 exec/s: 0 rss: 71Mb L: 18/18 MS: 1 CMP- DE: "\036\000\000\000"- 00:06:14.431 [2024-05-14 11:42:41.429886] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (11264) > buf size (4096) 00:06:14.431 [2024-05-14 11:42:41.430005] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:06:14.431 [2024-05-14 11:42:41.430226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.431 [2024-05-14 11:42:41.430258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.431 [2024-05-14 
11:42:41.430314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.431 [2024-05-14 11:42:41.430328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.431 #10 NEW cov: 12111 ft: 13571 corp: 7/85b lim: 30 exec/s: 0 rss: 71Mb L: 13/18 MS: 1 CrossOver- 00:06:14.431 [2024-05-14 11:42:41.469952] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (273408) > buf size (4096) 00:06:14.431 [2024-05-14 11:42:41.470172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff816f cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.431 [2024-05-14 11:42:41.470196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.431 #11 NEW cov: 12111 ft: 14003 corp: 8/92b lim: 30 exec/s: 0 rss: 71Mb L: 7/18 MS: 1 CrossOver- 00:06:14.431 [2024-05-14 11:42:41.510100] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:14.431 [2024-05-14 11:42:41.510216] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (62464) > len (4) 00:06:14.431 [2024-05-14 11:42:41.510416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.431 [2024-05-14 11:42:41.510444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.431 [2024-05-14 11:42:41.510499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.431 [2024-05-14 11:42:41.510515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.691 #12 NEW cov: 12117 ft: 14028 corp: 9/106b lim: 30 exec/s: 0 rss: 71Mb L: 14/18 MS: 1 InsertByte- 00:06:14.691 [2024-05-14 11:42:41.550245] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:14.691 [2024-05-14 11:42:41.550362] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (62464) > len (4) 00:06:14.691 [2024-05-14 11:42:41.550575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.691 [2024-05-14 11:42:41.550601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.691 [2024-05-14 11:42:41.550658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.691 [2024-05-14 11:42:41.550672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.691 #13 NEW cov: 12117 ft: 14078 corp: 10/120b lim: 30 exec/s: 0 rss: 71Mb L: 14/18 MS: 1 ShuffleBytes- 00:06:14.691 [2024-05-14 11:42:41.590331] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:14.691 [2024-05-14 11:42:41.590679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) 
qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.691 [2024-05-14 11:42:41.590706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.691 [2024-05-14 11:42:41.590762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.691 [2024-05-14 11:42:41.590776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.691 #14 NEW cov: 12117 ft: 14172 corp: 11/133b lim: 30 exec/s: 0 rss: 71Mb L: 13/18 MS: 1 CrossOver- 00:06:14.691 [2024-05-14 11:42:41.630525] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:14.691 [2024-05-14 11:42:41.630833] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa 00:06:14.691 [2024-05-14 11:42:41.631049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.691 [2024-05-14 11:42:41.631075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.691 [2024-05-14 11:42:41.631131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.691 [2024-05-14 11:42:41.631144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.691 [2024-05-14 11:42:41.631196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.691 [2024-05-14 11:42:41.631209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.691 [2024-05-14 11:42:41.631262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.691 [2024-05-14 11:42:41.631275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:14.691 #15 NEW cov: 12117 ft: 14710 corp: 12/157b lim: 30 exec/s: 0 rss: 71Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:06:14.691 [2024-05-14 11:42:41.671004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.691 [2024-05-14 11:42:41.671030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.691 [2024-05-14 11:42:41.671086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.691 [2024-05-14 11:42:41.671100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.691 [2024-05-14 11:42:41.671155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.691 
[2024-05-14 11:42:41.671168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.691 #19 NEW cov: 12117 ft: 14745 corp: 13/178b lim: 30 exec/s: 0 rss: 71Mb L: 21/24 MS: 4 InsertRepeatedBytes-ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:06:14.691 [2024-05-14 11:42:41.710730] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:14.691 [2024-05-14 11:42:41.710847] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:06:14.691 [2024-05-14 11:42:41.711057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.691 [2024-05-14 11:42:41.711083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.691 [2024-05-14 11:42:41.711137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:20000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.691 [2024-05-14 11:42:41.711150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.691 #20 NEW cov: 12117 ft: 14826 corp: 14/191b lim: 30 exec/s: 0 rss: 71Mb L: 13/24 MS: 1 ChangeBit- 00:06:14.691 [2024-05-14 11:42:41.750800] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:14.691 [2024-05-14 11:42:41.750920] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (54272) > len (4) 00:06:14.691 [2024-05-14 11:42:41.751115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.691 [2024-05-14 11:42:41.751140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.692 [2024-05-14 11:42:41.751194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.692 [2024-05-14 11:42:41.751208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.692 #21 NEW cov: 12117 ft: 14868 corp: 15/205b lim: 30 exec/s: 0 rss: 71Mb L: 14/24 MS: 1 ChangeBit- 00:06:14.951 [2024-05-14 11:42:41.790996] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:14.951 [2024-05-14 11:42:41.791125] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff01 00:06:14.951 [2024-05-14 11:42:41.791316] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa 00:06:14.951 [2024-05-14 11:42:41.791538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.951 [2024-05-14 11:42:41.791564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.951 [2024-05-14 11:42:41.791620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.951 [2024-05-14 11:42:41.791634] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.951 [2024-05-14 11:42:41.791686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.951 [2024-05-14 11:42:41.791700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:14.951 [2024-05-14 11:42:41.791752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:000000f4 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.951 [2024-05-14 11:42:41.791765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:14.951 NEW_FUNC[1/1]: 0x19feca0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:14.951 #22 NEW cov: 12140 ft: 14922 corp: 16/229b lim: 30 exec/s: 0 rss: 71Mb L: 24/24 MS: 1 CopyPart- 00:06:14.951 [2024-05-14 11:42:41.831056] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:14.951 [2024-05-14 11:42:41.831168] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (62464) > len (8) 00:06:14.951 [2024-05-14 11:42:41.831375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.951 [2024-05-14 11:42:41.831406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.951 [2024-05-14 11:42:41.831462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.951 [2024-05-14 11:42:41.831476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.951 #23 NEW cov: 12140 ft: 14946 corp: 17/243b lim: 30 exec/s: 0 rss: 71Mb L: 14/24 MS: 1 ChangeBit- 00:06:14.951 [2024-05-14 11:42:41.871135] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (11264) > buf size (4096) 00:06:14.951 [2024-05-14 11:42:41.871252] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000f7ff 00:06:14.951 [2024-05-14 11:42:41.871467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.951 [2024-05-14 11:42:41.871493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.951 [2024-05-14 11:42:41.871546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.951 [2024-05-14 11:42:41.871560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.951 #24 NEW cov: 12140 ft: 14957 corp: 18/256b lim: 30 exec/s: 24 rss: 71Mb L: 13/24 MS: 1 ChangeBinInt- 00:06:14.951 [2024-05-14 11:42:41.911296] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:14.951 [2024-05-14 11:42:41.911417] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page 
offset 0x1 00:06:14.951 [2024-05-14 11:42:41.911624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.951 [2024-05-14 11:42:41.911653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.951 [2024-05-14 11:42:41.911709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.951 [2024-05-14 11:42:41.911722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.951 #25 NEW cov: 12140 ft: 14976 corp: 19/269b lim: 30 exec/s: 25 rss: 71Mb L: 13/24 MS: 1 ChangeBit- 00:06:14.951 [2024-05-14 11:42:41.951389] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:14.951 [2024-05-14 11:42:41.951507] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1000000ff 00:06:14.951 [2024-05-14 11:42:41.951710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.951 [2024-05-14 11:42:41.951736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.951 [2024-05-14 11:42:41.951788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.951 [2024-05-14 11:42:41.951801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.951 #26 NEW cov: 12140 ft: 14989 corp: 20/283b lim: 30 exec/s: 26 rss: 71Mb L: 14/24 MS: 1 CrossOver- 00:06:14.951 [2024-05-14 11:42:41.991530] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (11264) > buf size (4096) 00:06:14.951 [2024-05-14 11:42:41.991644] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:06:14.951 [2024-05-14 11:42:41.991847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.951 [2024-05-14 11:42:41.991872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.951 [2024-05-14 11:42:41.991924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.951 [2024-05-14 11:42:41.991938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.951 #27 NEW cov: 12140 ft: 15002 corp: 21/296b lim: 30 exec/s: 27 rss: 71Mb L: 13/24 MS: 1 ChangeBit- 00:06:14.951 [2024-05-14 11:42:42.031700] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:14.951 [2024-05-14 11:42:42.031815] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (11264) > buf size (4096) 00:06:14.951 [2024-05-14 11:42:42.031928] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xff 00:06:14.951 [2024-05-14 11:42:42.032146] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.951 [2024-05-14 11:42:42.032172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.951 [2024-05-14 11:42:42.032227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0aff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.951 [2024-05-14 11:42:42.032241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.951 [2024-05-14 11:42:42.032291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.951 [2024-05-14 11:42:42.032304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.210 #28 NEW cov: 12140 ft: 15025 corp: 22/314b lim: 30 exec/s: 28 rss: 72Mb L: 18/24 MS: 1 CrossOver- 00:06:15.210 [2024-05-14 11:42:42.081842] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:15.210 [2024-05-14 11:42:42.082168] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa 00:06:15.210 [2024-05-14 11:42:42.082383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.210 [2024-05-14 11:42:42.082410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.210 [2024-05-14 11:42:42.082465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.210 [2024-05-14 11:42:42.082479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.211 [2024-05-14 11:42:42.082534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.211 [2024-05-14 11:42:42.082548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.211 [2024-05-14 11:42:42.082602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.211 [2024-05-14 11:42:42.082616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.211 #29 NEW cov: 12140 ft: 15034 corp: 23/338b lim: 30 exec/s: 29 rss: 72Mb L: 24/24 MS: 1 CrossOver- 00:06:15.211 [2024-05-14 11:42:42.131940] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:15.211 [2024-05-14 11:42:42.132150] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:06:15.211 [2024-05-14 11:42:42.132359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.211 [2024-05-14 11:42:42.132388] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.211 [2024-05-14 11:42:42.132442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.211 [2024-05-14 11:42:42.132456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.211 [2024-05-14 11:42:42.132507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.211 [2024-05-14 11:42:42.132520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.211 #30 NEW cov: 12140 ft: 15043 corp: 24/357b lim: 30 exec/s: 30 rss: 72Mb L: 19/24 MS: 1 CopyPart- 00:06:15.211 [2024-05-14 11:42:42.182085] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:15.211 [2024-05-14 11:42:42.182291] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (8448) > len (4) 00:06:15.211 [2024-05-14 11:42:42.182507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.211 [2024-05-14 11:42:42.182532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.211 [2024-05-14 11:42:42.182644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.211 [2024-05-14 11:42:42.182659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.211 NEW_FUNC[1/2]: 0x11784c0 in nvmf_ctrlr_unmask_aen /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:2265 00:06:15.211 NEW_FUNC[2/2]: 0x1178740 in nvmf_get_error_log_page /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:2319 00:06:15.211 #31 NEW cov: 12150 ft: 15109 corp: 25/379b lim: 30 exec/s: 31 rss: 72Mb L: 22/24 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:15.211 [2024-05-14 11:42:42.222206] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:15.211 [2024-05-14 11:42:42.222320] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:06:15.211 [2024-05-14 11:42:42.222724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.211 [2024-05-14 11:42:42.222750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.211 [2024-05-14 11:42:42.222804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.211 [2024-05-14 11:42:42.222818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.211 [2024-05-14 11:42:42.222869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:01000000 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.211 [2024-05-14 11:42:42.222882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.211 [2024-05-14 11:42:42.222935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.211 [2024-05-14 11:42:42.222948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.211 #32 NEW cov: 12150 ft: 15141 corp: 26/404b lim: 30 exec/s: 32 rss: 72Mb L: 25/25 MS: 1 InsertByte- 00:06:15.211 [2024-05-14 11:42:42.272407] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:15.211 [2024-05-14 11:42:42.272550] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffd1 00:06:15.211 [2024-05-14 11:42:42.272658] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261128) > buf size (4096) 00:06:15.211 [2024-05-14 11:42:42.272761] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (62464) > len (4) 00:06:15.211 [2024-05-14 11:42:42.272967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.211 [2024-05-14 11:42:42.272993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.211 [2024-05-14 11:42:42.273049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.211 [2024-05-14 11:42:42.273065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.211 [2024-05-14 11:42:42.273116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ff010000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.211 [2024-05-14 11:42:42.273129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.211 [2024-05-14 11:42:42.273181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.211 [2024-05-14 11:42:42.273194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.469 #33 NEW cov: 12150 ft: 15162 corp: 27/430b lim: 30 exec/s: 33 rss: 72Mb L: 26/26 MS: 1 InsertByte- 00:06:15.469 [2024-05-14 11:42:42.322506] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:15.469 [2024-05-14 11:42:42.322638] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:06:15.469 [2024-05-14 11:42:42.322841] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (596252) > buf size (4096) 00:06:15.469 [2024-05-14 11:42:42.323054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.469 [2024-05-14 11:42:42.323080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:15.469 [2024-05-14 11:42:42.323132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.469 [2024-05-14 11:42:42.323146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.469 [2024-05-14 11:42:42.323196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:01000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.469 [2024-05-14 11:42:42.323209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.469 [2024-05-14 11:42:42.323260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:46460246 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.469 [2024-05-14 11:42:42.323273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:15.469 #34 NEW cov: 12150 ft: 15175 corp: 28/459b lim: 30 exec/s: 34 rss: 72Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:06:15.469 [2024-05-14 11:42:42.362541] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:15.469 [2024-05-14 11:42:42.362658] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1 00:06:15.469 [2024-05-14 11:42:42.362873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.469 [2024-05-14 11:42:42.362899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.469 [2024-05-14 11:42:42.362954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.469 [2024-05-14 11:42:42.362969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.469 #35 NEW cov: 12150 ft: 15197 corp: 29/472b lim: 30 exec/s: 35 rss: 72Mb L: 13/29 MS: 1 ChangeByte- 00:06:15.469 [2024-05-14 11:42:42.402663] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:15.469 [2024-05-14 11:42:42.402777] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xd3ff 00:06:15.469 [2024-05-14 11:42:42.402985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.469 [2024-05-14 11:42:42.403010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.469 [2024-05-14 11:42:42.403064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.469 [2024-05-14 11:42:42.403077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.469 #36 NEW cov: 12150 ft: 15206 corp: 30/486b lim: 30 exec/s: 36 rss: 72Mb L: 14/29 MS: 1 ChangeBinInt- 00:06:15.469 [2024-05-14 11:42:42.442771] ctrlr.c:2624:nvmf_ctrlr_get_log_page: 
*ERROR*: Get log page: len (262144) > buf size (4096) 00:06:15.469 [2024-05-14 11:42:42.442905] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xd3ff 00:06:15.469 [2024-05-14 11:42:42.443114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.469 [2024-05-14 11:42:42.443140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.469 [2024-05-14 11:42:42.443194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.469 [2024-05-14 11:42:42.443207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.469 #37 NEW cov: 12150 ft: 15215 corp: 31/500b lim: 30 exec/s: 37 rss: 73Mb L: 14/29 MS: 1 ChangeBit- 00:06:15.469 [2024-05-14 11:42:42.482881] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:15.469 [2024-05-14 11:42:42.483199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.469 [2024-05-14 11:42:42.483223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.469 #38 NEW cov: 12150 ft: 15246 corp: 32/513b lim: 30 exec/s: 38 rss: 73Mb L: 13/29 MS: 1 CrossOver- 00:06:15.469 [2024-05-14 11:42:42.522986] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261568) > buf size (4096) 00:06:15.469 [2024-05-14 11:42:42.523188] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:15.470 [2024-05-14 11:42:42.523395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff6f0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.470 [2024-05-14 11:42:42.523420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.470 [2024-05-14 11:42:42.523479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.470 [2024-05-14 11:42:42.523493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.470 [2024-05-14 11:42:42.523547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.470 [2024-05-14 11:42:42.523561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.470 #39 NEW cov: 12150 ft: 15263 corp: 33/533b lim: 30 exec/s: 39 rss: 73Mb L: 20/29 MS: 1 CopyPart- 00:06:15.729 [2024-05-14 11:42:42.573149] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261128) > buf size (4096) 00:06:15.729 [2024-05-14 11:42:42.573263] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:06:15.729 [2024-05-14 11:42:42.573474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff010000 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.729 [2024-05-14 11:42:42.573503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.729 [2024-05-14 11:42:42.573556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.729 [2024-05-14 11:42:42.573569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.729 #40 NEW cov: 12150 ft: 15298 corp: 34/546b lim: 30 exec/s: 40 rss: 73Mb L: 13/29 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:15.729 [2024-05-14 11:42:42.613255] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (11136) > buf size (4096) 00:06:15.729 [2024-05-14 11:42:42.613371] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:06:15.730 [2024-05-14 11:42:42.613581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0adf0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.730 [2024-05-14 11:42:42.613607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.730 [2024-05-14 11:42:42.613662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.730 [2024-05-14 11:42:42.613676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.730 #41 NEW cov: 12150 ft: 15301 corp: 35/559b lim: 30 exec/s: 41 rss: 73Mb L: 13/29 MS: 1 ChangeBit- 00:06:15.730 [2024-05-14 11:42:42.653402] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (11264) > buf size (4096) 00:06:15.730 [2024-05-14 11:42:42.653516] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262148) > buf size (4096) 00:06:15.730 [2024-05-14 11:42:42.653615] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (71928) > buf size (4096) 00:06:15.730 [2024-05-14 11:42:42.653821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.730 [2024-05-14 11:42:42.653846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.730 [2024-05-14 11:42:42.653901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:000081f4 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.730 [2024-05-14 11:42:42.653915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.730 [2024-05-14 11:42:42.653966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:463d0085 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.730 [2024-05-14 11:42:42.653979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.730 #42 NEW cov: 12150 ft: 15311 corp: 36/580b lim: 30 exec/s: 42 rss: 73Mb L: 21/29 MS: 1 CMP- DE: "\364\325\223\330F=\205\000"- 00:06:15.730 [2024-05-14 11:42:42.693466] 
ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:15.730 [2024-05-14 11:42:42.693578] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:06:15.730 [2024-05-14 11:42:42.693780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.730 [2024-05-14 11:42:42.693806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.730 [2024-05-14 11:42:42.693859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.730 [2024-05-14 11:42:42.693876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.730 #43 NEW cov: 12150 ft: 15315 corp: 37/597b lim: 30 exec/s: 43 rss: 73Mb L: 17/29 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:06:15.730 [2024-05-14 11:42:42.733582] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261568) > buf size (4096) 00:06:15.730 [2024-05-14 11:42:42.733882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff6f0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.730 [2024-05-14 11:42:42.733907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.730 [2024-05-14 11:42:42.733964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.730 [2024-05-14 11:42:42.733977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.730 #44 NEW cov: 12150 ft: 15330 corp: 38/610b lim: 30 exec/s: 44 rss: 73Mb L: 13/29 MS: 1 ChangeBinInt- 00:06:15.730 [2024-05-14 11:42:42.773750] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:15.730 [2024-05-14 11:42:42.773949] ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:06:15.730 [2024-05-14 11:42:42.774150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.730 [2024-05-14 11:42:42.774175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.730 [2024-05-14 11:42:42.774230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.730 [2024-05-14 11:42:42.774244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.730 [2024-05-14 11:42:42.774297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.730 [2024-05-14 11:42:42.774311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.730 #45 NEW cov: 12150 ft: 15334 corp: 39/629b lim: 30 exec/s: 45 rss: 73Mb L: 
19/29 MS: 1 CopyPart- 00:06:15.989 [2024-05-14 11:42:42.823886] ctrlr.c:2624:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:06:15.989 [2024-05-14 11:42:42.824193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.989 [2024-05-14 11:42:42.824218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.989 [2024-05-14 11:42:42.824271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:15.989 [2024-05-14 11:42:42.824285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.989 #46 NEW cov: 12150 ft: 15337 corp: 40/643b lim: 30 exec/s: 46 rss: 73Mb L: 14/29 MS: 1 InsertByte- 00:06:15.989 #47 NEW cov: 12150 ft: 15371 corp: 41/650b lim: 30 exec/s: 23 rss: 73Mb L: 7/29 MS: 1 EraseBytes- 00:06:15.989 #47 DONE cov: 12150 ft: 15371 corp: 41/650b lim: 30 exec/s: 23 rss: 73Mb 00:06:15.989 ###### Recommended dictionary. ###### 00:06:15.989 "\377\377\377\377" # Uses: 1 00:06:15.989 "\001\000\000\000\000\000\000\000" # Uses: 2 00:06:15.989 "\036\000\000\000" # Uses: 0 00:06:15.989 "\364\325\223\330F=\205\000" # Uses: 0 00:06:15.989 ###### End of recommended dictionary. ###### 00:06:15.989 Done 47 runs in 2 second(s) 00:06:15.989 [2024-05-14 11:42:42.892174] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:15.989 11:42:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:06:15.989 11:42:43 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:15.989 11:42:43 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:15.989 11:42:43 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:06:15.989 11:42:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:06:15.989 11:42:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:15.989 11:42:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:15.989 11:42:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:15.989 11:42:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:06:15.989 11:42:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:15.989 11:42:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:15.989 11:42:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:06:15.989 11:42:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4402 00:06:15.989 11:42:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:15.989 11:42:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:06:15.989 11:42:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:15.989 11:42:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:15.989 11:42:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:15.989 11:42:43 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:06:15.989 [2024-05-14 11:42:43.059271] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:06:15.989 [2024-05-14 11:42:43.059343] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3633645 ] 00:06:16.249 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.249 [2024-05-14 11:42:43.310615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.508 [2024-05-14 11:42:43.400676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.508 [2024-05-14 11:42:43.459733] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:16.508 [2024-05-14 11:42:43.475692] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:16.508 [2024-05-14 11:42:43.476089] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:06:16.508 INFO: Running with entropic power schedule (0xFF, 100). 00:06:16.508 INFO: Seed: 2742415731 00:06:16.508 INFO: Loaded 1 modules (351429 inline 8-bit counters): 351429 [0x290e30c, 0x2963fd1), 00:06:16.508 INFO: Loaded 1 PC tables (351429 PCs): 351429 [0x2963fd8,0x2ec0c28), 00:06:16.508 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:16.508 INFO: A corpus is not provided, starting from an empty corpus 00:06:16.508 #2 INITED exec/s: 0 rss: 63Mb 00:06:16.508 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:16.508 This may also happen if the target rejected all inputs we tried so far 00:06:16.766 NEW_FUNC[1/671]: 0x4850d0 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:06:16.766 NEW_FUNC[2/671]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:16.766 #3 NEW cov: 11675 ft: 11675 corp: 2/10b lim: 35 exec/s: 0 rss: 70Mb L: 9/9 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:06:16.766 [2024-05-14 11:42:43.831962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0100000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:16.766 [2024-05-14 11:42:43.831999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.093 NEW_FUNC[1/14]: 0x1719e50 in spdk_nvme_print_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:263 00:06:17.093 NEW_FUNC[2/14]: 0x171a090 in nvme_admin_qpair_print_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:202 00:06:17.093 #5 NEW cov: 11936 ft: 12822 corp: 3/19b lim: 35 exec/s: 0 rss: 71Mb L: 9/9 MS: 2 ShuffleBytes-PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:17.093 [2024-05-14 11:42:43.872049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a010068 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.093 [2024-05-14 11:42:43.872079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.093 #10 NEW cov: 11942 ft: 13029 corp: 4/28b lim: 35 exec/s: 0 rss: 71Mb L: 9/9 MS: 5 ChangeBit-ChangeBit-CopyPart-ChangeBit-CrossOver- 00:06:17.093 #11 NEW cov: 12027 ft: 13320 corp: 5/38b lim: 35 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 InsertByte- 00:06:17.093 #12 NEW cov: 12027 ft: 13370 corp: 6/48b lim: 35 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 ShuffleBytes- 00:06:17.093 #13 NEW cov: 12027 ft: 13495 corp: 7/58b lim: 35 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 ShuffleBytes- 00:06:17.093 [2024-05-14 11:42:44.032482] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:17.093 [2024-05-14 11:42:44.032747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:005d0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.093 [2024-05-14 11:42:44.032776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.093 #14 NEW cov: 12036 ft: 13827 corp: 8/77b lim: 35 exec/s: 0 rss: 71Mb L: 19/19 MS: 1 CopyPart- 00:06:17.093 #15 NEW cov: 12036 ft: 13867 corp: 9/87b lim: 35 exec/s: 0 rss: 71Mb L: 10/19 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:17.093 #16 NEW cov: 12036 ft: 13949 corp: 10/99b lim: 35 exec/s: 0 rss: 72Mb L: 12/19 MS: 1 CMP- DE: "\001\003"- 00:06:17.352 #17 NEW cov: 12036 ft: 14036 corp: 11/111b lim: 35 exec/s: 0 rss: 72Mb L: 12/19 MS: 1 CopyPart- 00:06:17.352 #18 NEW cov: 12036 ft: 14070 corp: 12/121b lim: 35 exec/s: 0 rss: 72Mb L: 10/19 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:17.352 #19 NEW cov: 12036 ft: 14085 corp: 13/133b lim: 35 exec/s: 0 rss: 72Mb L: 12/19 MS: 1 ChangeBinInt- 00:06:17.352 #20 NEW cov: 12036 ft: 14109 corp: 14/145b lim: 35 exec/s: 0 
rss: 72Mb L: 12/19 MS: 1 ShuffleBytes- 00:06:17.352 #21 NEW cov: 12036 ft: 14135 corp: 15/155b lim: 35 exec/s: 0 rss: 72Mb L: 10/19 MS: 1 ChangeByte- 00:06:17.352 [2024-05-14 11:42:44.353357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.352 [2024-05-14 11:42:44.353388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.352 #22 NEW cov: 12036 ft: 14167 corp: 16/165b lim: 35 exec/s: 0 rss: 72Mb L: 10/19 MS: 1 ChangeBinInt- 00:06:17.352 NEW_FUNC[1/1]: 0x19feca0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:17.352 #23 NEW cov: 12059 ft: 14208 corp: 17/175b lim: 35 exec/s: 0 rss: 72Mb L: 10/19 MS: 1 CrossOver- 00:06:17.612 #24 NEW cov: 12059 ft: 14224 corp: 18/185b lim: 35 exec/s: 0 rss: 72Mb L: 10/19 MS: 1 ChangeByte- 00:06:17.612 #25 NEW cov: 12059 ft: 14232 corp: 19/197b lim: 35 exec/s: 0 rss: 72Mb L: 12/19 MS: 1 CMP- DE: "\005\000"- 00:06:17.612 [2024-05-14 11:42:44.493761] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:17.612 [2024-05-14 11:42:44.493885] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:17.612 [2024-05-14 11:42:44.494147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:005d0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.612 [2024-05-14 11:42:44.494175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.612 [2024-05-14 11:42:44.494229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.612 [2024-05-14 11:42:44.494244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.612 #26 NEW cov: 12059 ft: 14468 corp: 20/224b lim: 35 exec/s: 26 rss: 72Mb L: 27/27 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:06:17.612 #27 NEW cov: 12059 ft: 14490 corp: 21/234b lim: 35 exec/s: 27 rss: 72Mb L: 10/27 MS: 1 ShuffleBytes- 00:06:17.612 [2024-05-14 11:42:44.584056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0100000a cdw11:0100000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.612 [2024-05-14 11:42:44.584082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.612 #28 NEW cov: 12059 ft: 14558 corp: 22/243b lim: 35 exec/s: 28 rss: 72Mb L: 9/27 MS: 1 CopyPart- 00:06:17.612 [2024-05-14 11:42:44.623968] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:17.612 [2024-05-14 11:42:44.624195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.612 [2024-05-14 11:42:44.624222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.612 #29 NEW cov: 12059 ft: 14578 corp: 23/253b lim: 35 exec/s: 29 rss: 72Mb L: 10/27 MS: 1 CrossOver- 00:06:17.612 [2024-05-14 11:42:44.674327] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: 
*ERROR*: Identify Namespace for invalid NSID 0 00:06:17.612 [2024-05-14 11:42:44.674466] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:17.612 [2024-05-14 11:42:44.674730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:005d0000 cdw11:ff00f4ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.612 [2024-05-14 11:42:44.674758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.612 [2024-05-14 11:42:44.674815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.612 [2024-05-14 11:42:44.674831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.871 #30 NEW cov: 12059 ft: 14623 corp: 24/280b lim: 35 exec/s: 30 rss: 73Mb L: 27/27 MS: 1 CMP- DE: "\364\377\377\377"- 00:06:17.871 #31 NEW cov: 12059 ft: 14650 corp: 25/290b lim: 35 exec/s: 31 rss: 73Mb L: 10/27 MS: 1 ChangeByte- 00:06:17.871 #32 NEW cov: 12059 ft: 14662 corp: 26/300b lim: 35 exec/s: 32 rss: 73Mb L: 10/27 MS: 1 EraseBytes- 00:06:17.871 #33 NEW cov: 12068 ft: 14687 corp: 27/313b lim: 35 exec/s: 33 rss: 73Mb L: 13/27 MS: 1 InsertByte- 00:06:17.871 [2024-05-14 11:42:44.814716] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:17.871 [2024-05-14 11:42:44.814992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:5d0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.871 [2024-05-14 11:42:44.815020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.871 #34 NEW cov: 12068 ft: 14697 corp: 28/331b lim: 35 exec/s: 34 rss: 73Mb L: 18/27 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:06:17.871 #35 NEW cov: 12068 ft: 14714 corp: 29/341b lim: 35 exec/s: 35 rss: 73Mb L: 10/27 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:06:17.871 [2024-05-14 11:42:44.894801] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:17.871 [2024-05-14 11:42:44.895030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.871 [2024-05-14 11:42:44.895058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.871 #36 NEW cov: 12068 ft: 14739 corp: 30/351b lim: 35 exec/s: 36 rss: 73Mb L: 10/27 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:06:17.871 #37 NEW cov: 12068 ft: 14750 corp: 31/360b lim: 35 exec/s: 37 rss: 73Mb L: 9/27 MS: 1 EraseBytes- 00:06:18.130 [2024-05-14 11:42:44.965248] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:18.130 [2024-05-14 11:42:44.965365] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:18.130 [2024-05-14 11:42:44.965484] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:18.130 [2024-05-14 11:42:44.965759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:06:18.130 [2024-05-14 11:42:44.965787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.130 [2024-05-14 11:42:44.965845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:5d005d00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.130 [2024-05-14 11:42:44.965861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.130 [2024-05-14 11:42:44.965916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.130 [2024-05-14 11:42:44.965931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:18.130 #38 NEW cov: 12068 ft: 15282 corp: 32/388b lim: 35 exec/s: 38 rss: 73Mb L: 28/28 MS: 1 CrossOver- 00:06:18.130 #39 NEW cov: 12068 ft: 15296 corp: 33/398b lim: 35 exec/s: 39 rss: 73Mb L: 10/28 MS: 1 ShuffleBytes- 00:06:18.130 #40 NEW cov: 12068 ft: 15321 corp: 34/410b lim: 35 exec/s: 40 rss: 73Mb L: 12/28 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:18.130 #41 NEW cov: 12068 ft: 15329 corp: 35/421b lim: 35 exec/s: 41 rss: 73Mb L: 11/28 MS: 1 EraseBytes- 00:06:18.130 [2024-05-14 11:42:45.125473] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:18.130 [2024-05-14 11:42:45.125592] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:18.130 [2024-05-14 11:42:45.125796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00010005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.130 [2024-05-14 11:42:45.125823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.130 [2024-05-14 11:42:45.125881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:01030000 cdw11:d0000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.130 [2024-05-14 11:42:45.125897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.130 NEW_FUNC[1/1]: 0x111aac0 in spdk_nvmf_ns_identify_iocs_specific /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:2952 00:06:18.130 #42 NEW cov: 12084 ft: 15382 corp: 36/436b lim: 35 exec/s: 42 rss: 73Mb L: 15/28 MS: 1 PersAutoDict- DE: "\005\000"- 00:06:18.130 #43 NEW cov: 12084 ft: 15418 corp: 37/446b lim: 35 exec/s: 43 rss: 73Mb L: 10/28 MS: 1 ChangeBinInt- 00:06:18.389 #44 NEW cov: 12084 ft: 15419 corp: 38/457b lim: 35 exec/s: 44 rss: 73Mb L: 11/28 MS: 1 ChangeBinInt- 00:06:18.389 [2024-05-14 11:42:45.255770] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:18.389 [2024-05-14 11:42:45.255993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.389 [2024-05-14 11:42:45.256021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.389 #45 NEW cov: 12084 ft: 15465 corp: 39/467b lim: 35 exec/s: 
45 rss: 74Mb L: 10/28 MS: 1 ChangeBinInt- 00:06:18.389 [2024-05-14 11:42:45.295908] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:18.389 [2024-05-14 11:42:45.296120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.389 [2024-05-14 11:42:45.296147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.389 #46 NEW cov: 12084 ft: 15512 corp: 40/477b lim: 35 exec/s: 46 rss: 74Mb L: 10/28 MS: 1 CopyPart- 00:06:18.389 [2024-05-14 11:42:45.336252] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:18.389 [2024-05-14 11:42:45.336388] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:18.389 [2024-05-14 11:42:45.336504] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:18.389 [2024-05-14 11:42:45.336608] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:18.389 [2024-05-14 11:42:45.336878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.389 [2024-05-14 11:42:45.336905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.389 [2024-05-14 11:42:45.336960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.389 [2024-05-14 11:42:45.336975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.389 [2024-05-14 11:42:45.337028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.389 [2024-05-14 11:42:45.337044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:18.389 [2024-05-14 11:42:45.337099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:81000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.389 [2024-05-14 11:42:45.337115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:18.389 #47 NEW cov: 12084 ft: 15665 corp: 41/512b lim: 35 exec/s: 47 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:06:18.389 [2024-05-14 11:42:45.376275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:005a0088 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.389 [2024-05-14 11:42:45.376299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.389 #48 NEW cov: 12084 ft: 15675 corp: 42/522b lim: 35 exec/s: 48 rss: 74Mb L: 10/35 MS: 1 ChangeByte- 00:06:18.389 [2024-05-14 11:42:45.416401] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:18.389 [2024-05-14 11:42:45.416686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:5d000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.389 [2024-05-14 11:42:45.416713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.389 #49 NEW cov: 12084 ft: 15684 corp: 43/539b lim: 35 exec/s: 49 rss: 74Mb L: 17/35 MS: 1 CopyPart- 00:06:18.389 [2024-05-14 11:42:45.456333] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:18.389 [2024-05-14 11:42:45.456571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00010000 cdw11:00000300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.389 [2024-05-14 11:42:45.456599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.649 #50 NEW cov: 12084 ft: 15691 corp: 44/549b lim: 35 exec/s: 50 rss: 74Mb L: 10/35 MS: 1 PersAutoDict- DE: "\001\003"- 00:06:18.649 [2024-05-14 11:42:45.497215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:6f006f6f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.649 [2024-05-14 11:42:45.497242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.649 [2024-05-14 11:42:45.497299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:6f6f006f cdw11:6f006f6f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.649 [2024-05-14 11:42:45.497312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.649 [2024-05-14 11:42:45.497366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:6f6f006f cdw11:6f006f6f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.649 [2024-05-14 11:42:45.497383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:18.649 [2024-05-14 11:42:45.497452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:6f6f006f cdw11:5d006f6f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:18.649 [2024-05-14 11:42:45.497465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:18.649 #51 NEW cov: 12091 ft: 15723 corp: 45/584b lim: 35 exec/s: 25 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:06:18.649 #51 DONE cov: 12091 ft: 15723 corp: 45/584b lim: 35 exec/s: 25 rss: 74Mb 00:06:18.649 ###### Recommended dictionary. ###### 00:06:18.649 "\001\000\000\000\000\000\000\000" # Uses: 4 00:06:18.649 "\001\003" # Uses: 1 00:06:18.649 "\005\000" # Uses: 1 00:06:18.649 "\000\000\000\000\000\000\000\000" # Uses: 3 00:06:18.649 "\364\377\377\377" # Uses: 0 00:06:18.649 ###### End of recommended dictionary. 
###### 00:06:18.649 Done 51 runs in 2 second(s) 00:06:18.649 [2024-05-14 11:42:45.526765] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:18.649 11:42:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:06:18.649 11:42:45 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:18.649 11:42:45 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:18.649 11:42:45 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:06:18.649 11:42:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:06:18.649 11:42:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:18.649 11:42:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:18.649 11:42:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:18.649 11:42:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:06:18.649 11:42:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:18.649 11:42:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:18.649 11:42:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:06:18.649 11:42:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4403 00:06:18.649 11:42:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:18.649 11:42:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:06:18.649 11:42:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:18.649 11:42:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:18.649 11:42:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:18.649 11:42:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:06:18.649 [2024-05-14 11:42:45.693070] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:06:18.649 [2024-05-14 11:42:45.693149] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3634139 ] 00:06:18.649 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.908 [2024-05-14 11:42:45.942477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.167 [2024-05-14 11:42:46.035501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.167 [2024-05-14 11:42:46.094519] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:19.167 [2024-05-14 11:42:46.110470] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:19.167 [2024-05-14 11:42:46.110886] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:06:19.167 INFO: Running with entropic power schedule (0xFF, 100). 00:06:19.167 INFO: Seed: 1082455829 00:06:19.167 INFO: Loaded 1 modules (351429 inline 8-bit counters): 351429 [0x290e30c, 0x2963fd1), 00:06:19.167 INFO: Loaded 1 PC tables (351429 PCs): 351429 [0x2963fd8,0x2ec0c28), 00:06:19.167 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:19.167 INFO: A corpus is not provided, starting from an empty corpus 00:06:19.167 #2 INITED exec/s: 0 rss: 64Mb 00:06:19.167 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:19.167 This may also happen if the target rejected all inputs we tried so far 00:06:19.426 NEW_FUNC[1/670]: 0x486da0 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:06:19.426 NEW_FUNC[2/670]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:19.426 #9 NEW cov: 11652 ft: 11653 corp: 2/5b lim: 20 exec/s: 0 rss: 70Mb L: 4/4 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:19.685 NEW_FUNC[1/4]: 0xfabb00 in posix_sock_group_impl_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/sock/posix/posix.c:1985 00:06:19.685 NEW_FUNC[2/4]: 0x1a33fa0 in spdk_sock_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/sock/sock.c:704 00:06:19.685 #10 NEW cov: 11847 ft: 12682 corp: 3/15b lim: 20 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:06:19.685 #11 NEW cov: 11853 ft: 13034 corp: 4/20b lim: 20 exec/s: 0 rss: 71Mb L: 5/10 MS: 1 CrossOver- 00:06:19.685 #12 NEW cov: 11938 ft: 13256 corp: 5/25b lim: 20 exec/s: 0 rss: 71Mb L: 5/10 MS: 1 ChangeBit- 00:06:19.685 #13 NEW cov: 11955 ft: 13745 corp: 6/42b lim: 20 exec/s: 0 rss: 71Mb L: 17/17 MS: 1 InsertRepeatedBytes- 00:06:19.685 [2024-05-14 11:42:46.699017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:19.685 [2024-05-14 11:42:46.699062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.685 NEW_FUNC[1/17]: 0x1183100 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3333 00:06:19.685 NEW_FUNC[2/17]: 0x1183c80 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3275 00:06:19.685 #19 
NEW cov: 12198 ft: 14078 corp: 7/59b lim: 20 exec/s: 0 rss: 71Mb L: 17/17 MS: 1 InsertRepeatedBytes- 00:06:19.685 #20 NEW cov: 12198 ft: 14210 corp: 8/69b lim: 20 exec/s: 0 rss: 71Mb L: 10/17 MS: 1 ChangeBinInt- 00:06:19.944 [2024-05-14 11:42:46.789183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:19.944 [2024-05-14 11:42:46.789216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.944 #21 NEW cov: 12198 ft: 14287 corp: 9/86b lim: 20 exec/s: 0 rss: 71Mb L: 17/17 MS: 1 ChangeBit- 00:06:19.944 #22 NEW cov: 12198 ft: 14357 corp: 10/90b lim: 20 exec/s: 0 rss: 71Mb L: 4/17 MS: 1 ChangeBinInt- 00:06:19.944 #23 NEW cov: 12198 ft: 14383 corp: 11/107b lim: 20 exec/s: 0 rss: 72Mb L: 17/17 MS: 1 ShuffleBytes- 00:06:19.944 #24 NEW cov: 12202 ft: 14558 corp: 12/119b lim: 20 exec/s: 0 rss: 72Mb L: 12/17 MS: 1 CrossOver- 00:06:19.944 #25 NEW cov: 12202 ft: 14666 corp: 13/124b lim: 20 exec/s: 0 rss: 72Mb L: 5/17 MS: 1 ShuffleBytes- 00:06:19.944 #26 NEW cov: 12202 ft: 14701 corp: 14/128b lim: 20 exec/s: 0 rss: 72Mb L: 4/17 MS: 1 ChangeBit- 00:06:20.202 NEW_FUNC[1/1]: 0x19feca0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:20.202 #27 NEW cov: 12225 ft: 14735 corp: 15/141b lim: 20 exec/s: 0 rss: 72Mb L: 13/17 MS: 1 InsertByte- 00:06:20.202 #28 NEW cov: 12225 ft: 14757 corp: 16/145b lim: 20 exec/s: 0 rss: 72Mb L: 4/17 MS: 1 CMP- DE: "\007\000\000\000"- 00:06:20.202 #29 NEW cov: 12225 ft: 14770 corp: 17/159b lim: 20 exec/s: 29 rss: 72Mb L: 14/17 MS: 1 CrossOver- 00:06:20.202 #30 NEW cov: 12225 ft: 14787 corp: 18/168b lim: 20 exec/s: 30 rss: 72Mb L: 9/17 MS: 1 PersAutoDict- DE: "\007\000\000\000"- 00:06:20.202 NEW_FUNC[1/2]: 0x12e80b0 in nvmf_transport_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:777 00:06:20.202 NEW_FUNC[2/2]: 0x13092b0 in nvmf_tcp_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3514 00:06:20.202 #31 NEW cov: 12280 ft: 14877 corp: 19/188b lim: 20 exec/s: 31 rss: 72Mb L: 20/20 MS: 1 CrossOver- 00:06:20.461 #32 NEW cov: 12280 ft: 14881 corp: 20/192b lim: 20 exec/s: 32 rss: 72Mb L: 4/20 MS: 1 ShuffleBytes- 00:06:20.461 #33 NEW cov: 12280 ft: 14895 corp: 21/210b lim: 20 exec/s: 33 rss: 72Mb L: 18/20 MS: 1 InsertByte- 00:06:20.461 #34 NEW cov: 12280 ft: 14906 corp: 22/228b lim: 20 exec/s: 34 rss: 72Mb L: 18/20 MS: 1 InsertByte- 00:06:20.461 #35 NEW cov: 12280 ft: 14920 corp: 23/245b lim: 20 exec/s: 35 rss: 72Mb L: 17/20 MS: 1 PersAutoDict- DE: "\007\000\000\000"- 00:06:20.461 #36 NEW cov: 12280 ft: 14935 corp: 24/251b lim: 20 exec/s: 36 rss: 72Mb L: 6/20 MS: 1 CrossOver- 00:06:20.461 #37 NEW cov: 12280 ft: 14946 corp: 25/264b lim: 20 exec/s: 37 rss: 72Mb L: 13/20 MS: 1 CMP- DE: "\252\325\343\250I=\205\000"- 00:06:20.720 #38 NEW cov: 12280 ft: 14963 corp: 26/269b lim: 20 exec/s: 38 rss: 72Mb L: 5/20 MS: 1 ChangeBinInt- 00:06:20.720 #39 NEW cov: 12280 ft: 14972 corp: 27/287b lim: 20 exec/s: 39 rss: 73Mb L: 18/20 MS: 1 ChangeByte- 00:06:20.720 #40 NEW cov: 12280 ft: 14983 corp: 28/300b lim: 20 exec/s: 40 rss: 73Mb L: 13/20 MS: 1 ChangeByte- 00:06:20.720 #41 NEW cov: 12280 ft: 15002 corp: 29/305b lim: 20 exec/s: 41 rss: 73Mb L: 5/20 MS: 1 InsertByte- 00:06:20.720 #42 NEW cov: 12280 ft: 15015 corp: 30/311b lim: 20 exec/s: 42 rss: 73Mb L: 6/20 MS: 1 InsertByte- 00:06:20.720 #43 NEW cov: 
12280 ft: 15028 corp: 31/317b lim: 20 exec/s: 43 rss: 73Mb L: 6/20 MS: 1 ShuffleBytes- 00:06:20.979 #44 NEW cov: 12280 ft: 15060 corp: 32/323b lim: 20 exec/s: 44 rss: 73Mb L: 6/20 MS: 1 ChangeBit- 00:06:20.979 [2024-05-14 11:42:47.862477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.979 [2024-05-14 11:42:47.862522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.979 #45 NEW cov: 12280 ft: 15081 corp: 33/341b lim: 20 exec/s: 45 rss: 73Mb L: 18/20 MS: 1 InsertByte- 00:06:20.979 #46 NEW cov: 12280 ft: 15115 corp: 34/347b lim: 20 exec/s: 46 rss: 73Mb L: 6/20 MS: 1 InsertByte- 00:06:20.979 #47 NEW cov: 12280 ft: 15123 corp: 35/366b lim: 20 exec/s: 47 rss: 73Mb L: 19/20 MS: 1 InsertByte- 00:06:20.979 #48 NEW cov: 12280 ft: 15167 corp: 36/371b lim: 20 exec/s: 48 rss: 73Mb L: 5/20 MS: 1 ShuffleBytes- 00:06:20.979 #49 NEW cov: 12280 ft: 15214 corp: 37/377b lim: 20 exec/s: 49 rss: 73Mb L: 6/20 MS: 1 ChangeByte- 00:06:21.238 [2024-05-14 11:42:48.073084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:21.238 [2024-05-14 11:42:48.073124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.238 #50 NEW cov: 12280 ft: 15233 corp: 38/394b lim: 20 exec/s: 50 rss: 73Mb L: 17/20 MS: 1 CopyPart- 00:06:21.238 #51 NEW cov: 12280 ft: 15271 corp: 39/398b lim: 20 exec/s: 51 rss: 74Mb L: 4/20 MS: 1 ChangeBit- 00:06:21.238 #52 NEW cov: 12280 ft: 15279 corp: 40/402b lim: 20 exec/s: 26 rss: 74Mb L: 4/20 MS: 1 ChangeByte- 00:06:21.238 #52 DONE cov: 12280 ft: 15279 corp: 40/402b lim: 20 exec/s: 26 rss: 74Mb 00:06:21.238 ###### Recommended dictionary. ###### 00:06:21.238 "\007\000\000\000" # Uses: 2 00:06:21.238 "\252\325\343\250I=\205\000" # Uses: 0 00:06:21.238 ###### End of recommended dictionary. 
###### 00:06:21.238 Done 52 runs in 2 second(s) 00:06:21.238 [2024-05-14 11:42:48.181575] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:21.238 11:42:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:06:21.238 11:42:48 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:21.238 11:42:48 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:21.238 11:42:48 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:06:21.238 11:42:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:06:21.238 11:42:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:21.238 11:42:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:21.238 11:42:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:21.238 11:42:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:06:21.238 11:42:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:21.238 11:42:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:21.238 11:42:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:06:21.238 11:42:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4404 00:06:21.238 11:42:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:21.238 11:42:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:06:21.238 11:42:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:21.238 11:42:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:21.238 11:42:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:21.238 11:42:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:06:21.497 [2024-05-14 11:42:48.349645] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:06:21.497 [2024-05-14 11:42:48.349712] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3634676 ] 00:06:21.497 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.756 [2024-05-14 11:42:48.601782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.756 [2024-05-14 11:42:48.692965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.756 [2024-05-14 11:42:48.751681] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.756 [2024-05-14 11:42:48.767640] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:21.756 [2024-05-14 11:42:48.768054] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:06:21.756 INFO: Running with entropic power schedule (0xFF, 100). 00:06:21.756 INFO: Seed: 3739452306 00:06:21.756 INFO: Loaded 1 modules (351429 inline 8-bit counters): 351429 [0x290e30c, 0x2963fd1), 00:06:21.756 INFO: Loaded 1 PC tables (351429 PCs): 351429 [0x2963fd8,0x2ec0c28), 00:06:21.756 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:21.756 INFO: A corpus is not provided, starting from an empty corpus 00:06:21.756 #2 INITED exec/s: 0 rss: 64Mb 00:06:21.757 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:21.757 This may also happen if the target rejected all inputs we tried so far 00:06:21.757 [2024-05-14 11:42:48.813415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d6d6d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.757 [2024-05-14 11:42:48.813443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.757 [2024-05-14 11:42:48.813499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6d6d6d6d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.757 [2024-05-14 11:42:48.813513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.324 NEW_FUNC[1/686]: 0x487e90 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:06:22.324 NEW_FUNC[2/686]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:22.324 #5 NEW cov: 11827 ft: 11822 corp: 2/18b lim: 35 exec/s: 0 rss: 70Mb L: 17/17 MS: 3 InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:06:22.324 [2024-05-14 11:42:49.124138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d0a6d6d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.324 [2024-05-14 11:42:49.124172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.324 [2024-05-14 11:42:49.124243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6d6d6d6d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.324 [2024-05-14 
11:42:49.124257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.324 #6 NEW cov: 11957 ft: 12426 corp: 3/36b lim: 35 exec/s: 0 rss: 70Mb L: 18/18 MS: 1 CrossOver- 00:06:22.324 [2024-05-14 11:42:49.174206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d0a6d6d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.324 [2024-05-14 11:42:49.174234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.324 [2024-05-14 11:42:49.174288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6d6d6d6d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.324 [2024-05-14 11:42:49.174302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.324 #7 NEW cov: 11963 ft: 12683 corp: 4/55b lim: 35 exec/s: 0 rss: 70Mb L: 19/19 MS: 1 InsertByte- 00:06:22.324 [2024-05-14 11:42:49.214167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.324 [2024-05-14 11:42:49.214193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.324 #10 NEW cov: 12048 ft: 13609 corp: 5/65b lim: 35 exec/s: 0 rss: 70Mb L: 10/19 MS: 3 InsertByte-ChangeByte-CMP- DE: "\002\000\000\000\000\000\000\000"- 00:06:22.324 [2024-05-14 11:42:49.254588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d0200 cdw11:6d0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.324 [2024-05-14 11:42:49.254614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.324 [2024-05-14 11:42:49.254669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:006d006d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.324 [2024-05-14 11:42:49.254683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.324 [2024-05-14 11:42:49.254737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:6d6d6d3b cdw11:6d270002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.324 [2024-05-14 11:42:49.254751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.324 #12 NEW cov: 12048 ft: 13897 corp: 6/86b lim: 35 exec/s: 0 rss: 71Mb L: 21/21 MS: 2 EraseBytes-CrossOver- 00:06:22.324 [2024-05-14 11:42:49.294827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d0200 cdw11:6d0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.324 [2024-05-14 11:42:49.294853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.324 [2024-05-14 11:42:49.294923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:006d006d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.324 [2024-05-14 11:42:49.294937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.324 [2024-05-14 11:42:49.294990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:6d6d6d3b cdw11:6d270002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.324 [2024-05-14 11:42:49.295003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.324 [2024-05-14 11:42:49.295054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:006d006d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.324 [2024-05-14 11:42:49.295067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:22.324 #13 NEW cov: 12048 ft: 14282 corp: 7/120b lim: 35 exec/s: 0 rss: 71Mb L: 34/34 MS: 1 CopyPart- 00:06:22.324 [2024-05-14 11:42:49.344948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d0200 cdw11:6d0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.324 [2024-05-14 11:42:49.344974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.324 [2024-05-14 11:42:49.345030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:006d006d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.324 [2024-05-14 11:42:49.345043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.324 [2024-05-14 11:42:49.345095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:6d6d6d3b cdw11:6d270002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.324 [2024-05-14 11:42:49.345109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.324 [2024-05-14 11:42:49.345162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ff6d008e cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.324 [2024-05-14 11:42:49.345175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:22.324 #14 NEW cov: 12048 ft: 14347 corp: 8/154b lim: 35 exec/s: 0 rss: 71Mb L: 34/34 MS: 1 ChangeBinInt- 00:06:22.324 [2024-05-14 11:42:49.394659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00020200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.324 [2024-05-14 11:42:49.394685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.583 #15 NEW cov: 12048 ft: 14386 corp: 9/164b lim: 35 exec/s: 0 rss: 71Mb L: 10/34 MS: 1 CopyPart- 00:06:22.583 [2024-05-14 11:42:49.434905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d6d6d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.583 [2024-05-14 11:42:49.434931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.583 [2024-05-14 11:42:49.434987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6d6d6d6d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.583 
[2024-05-14 11:42:49.435001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.583 #16 NEW cov: 12048 ft: 14482 corp: 10/181b lim: 35 exec/s: 0 rss: 71Mb L: 17/34 MS: 1 ChangeByte- 00:06:22.583 [2024-05-14 11:42:49.475183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d0200 cdw11:6d0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.583 [2024-05-14 11:42:49.475211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.583 [2024-05-14 11:42:49.475285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:006d006d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.583 [2024-05-14 11:42:49.475302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.583 [2024-05-14 11:42:49.475360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:6d6d653b cdw11:6d270002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.583 [2024-05-14 11:42:49.475376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.583 #17 NEW cov: 12048 ft: 14515 corp: 11/202b lim: 35 exec/s: 0 rss: 71Mb L: 21/34 MS: 1 ChangeBit- 00:06:22.583 [2024-05-14 11:42:49.515451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d0200 cdw11:6d0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.583 [2024-05-14 11:42:49.515477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.583 [2024-05-14 11:42:49.515535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:006d006d cdw11:6d020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.583 [2024-05-14 11:42:49.515549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.583 [2024-05-14 11:42:49.515605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.583 [2024-05-14 11:42:49.515618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.583 [2024-05-14 11:42:49.515673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:006d006d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.583 [2024-05-14 11:42:49.515686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:22.583 #18 NEW cov: 12048 ft: 14588 corp: 12/236b lim: 35 exec/s: 0 rss: 71Mb L: 34/34 MS: 1 PersAutoDict- DE: "\002\000\000\000\000\000\000\000"- 00:06:22.583 [2024-05-14 11:42:49.555228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d6d6d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.583 [2024-05-14 11:42:49.555254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.583 [2024-05-14 11:42:49.555308] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6d6d6d6d cdw11:e86d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.583 [2024-05-14 11:42:49.555321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.583 #19 NEW cov: 12048 ft: 14658 corp: 13/254b lim: 35 exec/s: 0 rss: 71Mb L: 18/34 MS: 1 InsertByte- 00:06:22.583 [2024-05-14 11:42:49.595225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.583 [2024-05-14 11:42:49.595252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.583 #20 NEW cov: 12048 ft: 14675 corp: 14/263b lim: 35 exec/s: 0 rss: 71Mb L: 9/34 MS: 1 PersAutoDict- DE: "\002\000\000\000\000\000\000\000"- 00:06:22.583 [2024-05-14 11:42:49.635636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d0200 cdw11:6d0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.583 [2024-05-14 11:42:49.635661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.583 [2024-05-14 11:42:49.635717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:006d006d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.583 [2024-05-14 11:42:49.635731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.583 [2024-05-14 11:42:49.635783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:6d6d6533 cdw11:6d270002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.584 [2024-05-14 11:42:49.635796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.584 #21 NEW cov: 12048 ft: 14730 corp: 15/284b lim: 35 exec/s: 0 rss: 71Mb L: 21/34 MS: 1 ChangeBit- 00:06:22.843 [2024-05-14 11:42:49.675762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d0200 cdw11:6d0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.843 [2024-05-14 11:42:49.675788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.843 [2024-05-14 11:42:49.675842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:006d006d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.843 [2024-05-14 11:42:49.675855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.843 [2024-05-14 11:42:49.675910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:cd6d6533 cdw11:6d6d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.843 [2024-05-14 11:42:49.675923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.843 NEW_FUNC[1/1]: 0x19feca0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:22.843 #22 NEW cov: 12071 ft: 14769 corp: 16/306b lim: 35 exec/s: 0 rss: 72Mb L: 22/34 MS: 1 InsertByte- 00:06:22.843 [2024-05-14 
11:42:49.726047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d0200 cdw11:6d0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.843 [2024-05-14 11:42:49.726073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.843 [2024-05-14 11:42:49.726125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:006d006d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.843 [2024-05-14 11:42:49.726142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.843 [2024-05-14 11:42:49.726194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:cd6d6533 cdw11:6d020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.843 [2024-05-14 11:42:49.726207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.843 [2024-05-14 11:42:49.726259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.843 [2024-05-14 11:42:49.726272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:22.843 #23 NEW cov: 12071 ft: 14777 corp: 17/336b lim: 35 exec/s: 0 rss: 72Mb L: 30/34 MS: 1 PersAutoDict- DE: "\002\000\000\000\000\000\000\000"- 00:06:22.843 [2024-05-14 11:42:49.776050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d0200 cdw11:6d0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.843 [2024-05-14 11:42:49.776076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.843 [2024-05-14 11:42:49.776131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:006d006d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.843 [2024-05-14 11:42:49.776145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.843 [2024-05-14 11:42:49.776197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:6d6d653b cdw11:6d270003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.843 [2024-05-14 11:42:49.776211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.843 #24 NEW cov: 12071 ft: 14876 corp: 18/357b lim: 35 exec/s: 0 rss: 72Mb L: 21/34 MS: 1 ChangeByte- 00:06:22.843 [2024-05-14 11:42:49.816168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d0200 cdw11:6d000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.843 [2024-05-14 11:42:49.816193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.843 [2024-05-14 11:42:49.816250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6d006d0a cdw11:6d000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.843 [2024-05-14 11:42:49.816263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:06:22.843 [2024-05-14 11:42:49.816316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:6d6d6d6d cdw11:3b6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.843 [2024-05-14 11:42:49.816330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.843 #25 NEW cov: 12071 ft: 14880 corp: 19/381b lim: 35 exec/s: 25 rss: 72Mb L: 24/34 MS: 1 CopyPart- 00:06:22.843 [2024-05-14 11:42:49.856087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d6d6d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.843 [2024-05-14 11:42:49.856112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.843 [2024-05-14 11:42:49.856168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6d6d6d6d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.843 [2024-05-14 11:42:49.856182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.843 #26 NEW cov: 12071 ft: 14934 corp: 20/398b lim: 35 exec/s: 26 rss: 72Mb L: 17/34 MS: 1 ChangeByte- 00:06:22.843 [2024-05-14 11:42:49.896391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d0200 cdw11:6d0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.843 [2024-05-14 11:42:49.896416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.843 [2024-05-14 11:42:49.896469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:006d006d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.843 [2024-05-14 11:42:49.896483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.843 [2024-05-14 11:42:49.896534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:6d6d653f cdw11:6d270003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.843 [2024-05-14 11:42:49.896548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.843 #27 NEW cov: 12071 ft: 14990 corp: 21/419b lim: 35 exec/s: 27 rss: 72Mb L: 21/34 MS: 1 ChangeBit- 00:06:23.103 [2024-05-14 11:42:49.936693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:02000200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.103 [2024-05-14 11:42:49.936719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.103 [2024-05-14 11:42:49.936776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:006d0000 cdw11:6d6d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.103 [2024-05-14 11:42:49.936790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.103 [2024-05-14 11:42:49.936845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:6d006d00 cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.103 [2024-05-14 11:42:49.936859] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.103 [2024-05-14 11:42:49.936913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:3f6d6d65 cdw11:6d6d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.103 [2024-05-14 11:42:49.936927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.103 #28 NEW cov: 12071 ft: 14996 corp: 22/448b lim: 35 exec/s: 28 rss: 72Mb L: 29/34 MS: 1 PersAutoDict- DE: "\002\000\000\000\000\000\000\000"- 00:06:23.103 [2024-05-14 11:42:49.986651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d0200 cdw11:6d000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.103 [2024-05-14 11:42:49.986676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.103 [2024-05-14 11:42:49.986731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6d006d0a cdw11:6d000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.103 [2024-05-14 11:42:49.986745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.103 [2024-05-14 11:42:49.986798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:6d6d6d6d cdw11:3b6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.103 [2024-05-14 11:42:49.986811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.103 #29 NEW cov: 12071 ft: 15010 corp: 23/472b lim: 35 exec/s: 29 rss: 72Mb L: 24/34 MS: 1 ChangeBinInt- 00:06:23.103 [2024-05-14 11:42:50.037158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d0200 cdw11:6d0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.103 [2024-05-14 11:42:50.037186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.103 [2024-05-14 11:42:50.037243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6d006d00 cdw11:6d6d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.103 [2024-05-14 11:42:50.037258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.103 [2024-05-14 11:42:50.037309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.103 [2024-05-14 11:42:50.037323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.103 [2024-05-14 11:42:50.037376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:6d006d00 cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.103 [2024-05-14 11:42:50.037393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.104 [2024-05-14 11:42:50.037447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:3b6d6d6d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:23.104 [2024-05-14 11:42:50.037460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:23.104 #30 NEW cov: 12071 ft: 15059 corp: 24/507b lim: 35 exec/s: 30 rss: 72Mb L: 35/35 MS: 1 CopyPart- 00:06:23.104 [2024-05-14 11:42:50.087127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d0200 cdw11:6d0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.104 [2024-05-14 11:42:50.087156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.104 [2024-05-14 11:42:50.087212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:006d006d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.104 [2024-05-14 11:42:50.087226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.104 [2024-05-14 11:42:50.087282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:02006533 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.104 [2024-05-14 11:42:50.087296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.104 [2024-05-14 11:42:50.087353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00cd0000 cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.104 [2024-05-14 11:42:50.087366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.104 #31 NEW cov: 12071 ft: 15067 corp: 25/537b lim: 35 exec/s: 31 rss: 72Mb L: 30/35 MS: 1 PersAutoDict- DE: "\002\000\000\000\000\000\000\000"- 00:06:23.104 [2024-05-14 11:42:50.127118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d0200 cdw11:6d0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.104 [2024-05-14 11:42:50.127145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.104 [2024-05-14 11:42:50.127202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6d006d00 cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.104 [2024-05-14 11:42:50.127216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.104 [2024-05-14 11:42:50.127271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:6d6d6533 cdw11:6d270002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.104 [2024-05-14 11:42:50.127285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.104 #32 NEW cov: 12071 ft: 15072 corp: 26/558b lim: 35 exec/s: 32 rss: 72Mb L: 21/35 MS: 1 ShuffleBytes- 00:06:23.104 [2024-05-14 11:42:50.167371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d6d6d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.104 [2024-05-14 11:42:50.167401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.104 [2024-05-14 11:42:50.167458] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6d6d6d6d cdw11:e86d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.104 [2024-05-14 11:42:50.167471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.104 [2024-05-14 11:42:50.167525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:33333333 cdw11:33330000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.104 [2024-05-14 11:42:50.167538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.104 [2024-05-14 11:42:50.167589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:336d3333 cdw11:6d000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.104 [2024-05-14 11:42:50.167603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.363 #33 NEW cov: 12071 ft: 15091 corp: 27/587b lim: 35 exec/s: 33 rss: 72Mb L: 29/35 MS: 1 InsertRepeatedBytes- 00:06:23.363 [2024-05-14 11:42:50.217050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.363 [2024-05-14 11:42:50.217076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.363 #34 NEW cov: 12071 ft: 15099 corp: 28/597b lim: 35 exec/s: 34 rss: 72Mb L: 10/35 MS: 1 ChangeBinInt- 00:06:23.363 [2024-05-14 11:42:50.267211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00020200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.363 [2024-05-14 11:42:50.267236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.363 #35 NEW cov: 12071 ft: 15117 corp: 29/607b lim: 35 exec/s: 35 rss: 73Mb L: 10/35 MS: 1 ShuffleBytes- 00:06:23.363 [2024-05-14 11:42:50.307600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d0200 cdw11:ffff0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.363 [2024-05-14 11:42:50.307625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.363 [2024-05-14 11:42:50.307679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6d000a6d cdw11:6d000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.363 [2024-05-14 11:42:50.307693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.363 [2024-05-14 11:42:50.307746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:65336d6d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.363 [2024-05-14 11:42:50.307760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.363 #36 NEW cov: 12071 ft: 15126 corp: 30/630b lim: 35 exec/s: 36 rss: 73Mb L: 23/35 MS: 1 CMP- DE: "\377\377"- 00:06:23.363 [2024-05-14 11:42:50.357460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000200 
cdw11:00020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.363 [2024-05-14 11:42:50.357485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.363 #37 NEW cov: 12071 ft: 15140 corp: 31/640b lim: 35 exec/s: 37 rss: 73Mb L: 10/35 MS: 1 ShuffleBytes- 00:06:23.363 [2024-05-14 11:42:50.408158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d0200 cdw11:6d0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.363 [2024-05-14 11:42:50.408183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.363 [2024-05-14 11:42:50.408238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6d006d00 cdw11:6d020002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.363 [2024-05-14 11:42:50.408252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.363 [2024-05-14 11:42:50.408305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.363 [2024-05-14 11:42:50.408319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.363 [2024-05-14 11:42:50.408374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:6d006d00 cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.363 [2024-05-14 11:42:50.408392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.363 [2024-05-14 11:42:50.408443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:3b6d6d6d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.363 [2024-05-14 11:42:50.408456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:23.622 [2024-05-14 11:42:50.458373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d0200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.622 [2024-05-14 11:42:50.458419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.622 [2024-05-14 11:42:50.458471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:23020002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.622 [2024-05-14 11:42:50.458485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.622 [2024-05-14 11:42:50.458538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.622 [2024-05-14 11:42:50.458552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.622 [2024-05-14 11:42:50.458604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:6d006d00 cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.622 [2024-05-14 11:42:50.458617] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.622 [2024-05-14 11:42:50.458669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:3b6d6d6d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.622 [2024-05-14 11:42:50.458682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:23.622 #39 NEW cov: 12071 ft: 15146 corp: 32/675b lim: 35 exec/s: 39 rss: 73Mb L: 35/35 MS: 2 ShuffleBytes-ChangeBinInt- 00:06:23.622 [2024-05-14 11:42:50.498275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d0200 cdw11:6d0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.622 [2024-05-14 11:42:50.498301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.622 [2024-05-14 11:42:50.498354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:006d006d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.622 [2024-05-14 11:42:50.498368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.623 [2024-05-14 11:42:50.498426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:6d6d6d3b cdw11:6d270002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.623 [2024-05-14 11:42:50.498440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.623 [2024-05-14 11:42:50.498493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ff6d008e cdw11:6d3f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.623 [2024-05-14 11:42:50.498506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.623 #40 NEW cov: 12071 ft: 15166 corp: 33/709b lim: 35 exec/s: 40 rss: 73Mb L: 34/35 MS: 1 ChangeByte- 00:06:23.623 [2024-05-14 11:42:50.547964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d6d6d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.623 [2024-05-14 11:42:50.547990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.623 #41 NEW cov: 12071 ft: 15168 corp: 34/720b lim: 35 exec/s: 41 rss: 73Mb L: 11/35 MS: 1 EraseBytes- 00:06:23.623 [2024-05-14 11:42:50.588558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d0200 cdw11:6d0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.623 [2024-05-14 11:42:50.588585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.623 [2024-05-14 11:42:50.588640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:006d006d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.623 [2024-05-14 11:42:50.588654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.623 [2024-05-14 11:42:50.588707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 
nsid:0 cdw10:6d6d653b cdw11:6d020000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.623 [2024-05-14 11:42:50.588721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.623 [2024-05-14 11:42:50.588771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.623 [2024-05-14 11:42:50.588785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.623 #42 NEW cov: 12071 ft: 15180 corp: 35/750b lim: 35 exec/s: 42 rss: 73Mb L: 30/35 MS: 1 CrossOver- 00:06:23.623 [2024-05-14 11:42:50.628498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d0200 cdw11:6d0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.623 [2024-05-14 11:42:50.628524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.623 [2024-05-14 11:42:50.628580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6d65006d cdw11:006d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.623 [2024-05-14 11:42:50.628594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.623 [2024-05-14 11:42:50.628649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:6d6d6d3b cdw11:6d270002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.623 [2024-05-14 11:42:50.628663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.623 #43 NEW cov: 12071 ft: 15195 corp: 36/771b lim: 35 exec/s: 43 rss: 73Mb L: 21/35 MS: 1 ShuffleBytes- 00:06:23.623 [2024-05-14 11:42:50.668300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000204 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.623 [2024-05-14 11:42:50.668331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.623 #44 NEW cov: 12071 ft: 15213 corp: 37/781b lim: 35 exec/s: 44 rss: 73Mb L: 10/35 MS: 1 ChangeBit- 00:06:23.882 [2024-05-14 11:42:50.719046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d0200 cdw11:6d0a0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.882 [2024-05-14 11:42:50.719074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.882 [2024-05-14 11:42:50.719131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6d736d00 cdw11:6d6d0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.882 [2024-05-14 11:42:50.719145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.882 [2024-05-14 11:42:50.719199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.882 [2024-05-14 11:42:50.719213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.882 
[2024-05-14 11:42:50.719265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:6d006d00 cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.882 [2024-05-14 11:42:50.719279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.882 [2024-05-14 11:42:50.719330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:3b6d6d6d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.882 [2024-05-14 11:42:50.719344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:23.882 #45 NEW cov: 12071 ft: 15250 corp: 38/816b lim: 35 exec/s: 45 rss: 73Mb L: 35/35 MS: 1 ChangeByte- 00:06:23.882 [2024-05-14 11:42:50.758669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:7d6d6d6d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.882 [2024-05-14 11:42:50.758697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.882 [2024-05-14 11:42:50.758751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6d6d6d6d cdw11:6d6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.882 [2024-05-14 11:42:50.758765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.882 #46 NEW cov: 12071 ft: 15255 corp: 39/833b lim: 35 exec/s: 46 rss: 73Mb L: 17/35 MS: 1 ChangeBit- 00:06:23.882 [2024-05-14 11:42:50.798993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:6d6d0200 cdw11:6d000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.882 [2024-05-14 11:42:50.799030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.883 [2024-05-14 11:42:50.799085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:6d006d0a cdw11:6d000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.883 [2024-05-14 11:42:50.799098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.883 [2024-05-14 11:42:50.799152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:6d306d6d cdw11:3b6d0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.883 [2024-05-14 11:42:50.799165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.883 #47 NEW cov: 12071 ft: 15274 corp: 40/857b lim: 35 exec/s: 23 rss: 73Mb L: 24/35 MS: 1 ChangeByte- 00:06:23.883 #47 DONE cov: 12071 ft: 15274 corp: 40/857b lim: 35 exec/s: 23 rss: 73Mb 00:06:23.883 ###### Recommended dictionary. ###### 00:06:23.883 "\002\000\000\000\000\000\000\000" # Uses: 5 00:06:23.883 "\377\377" # Uses: 0 00:06:23.883 ###### End of recommended dictionary. 
###### 00:06:23.883 Done 47 runs in 2 second(s) 00:06:23.883 [2024-05-14 11:42:50.827943] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:23.883 11:42:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:06:23.883 11:42:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:23.883 11:42:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:23.883 11:42:50 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:06:23.883 11:42:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:06:23.883 11:42:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:23.883 11:42:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:23.883 11:42:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:23.883 11:42:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:06:23.883 11:42:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:23.883 11:42:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:23.883 11:42:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:06:23.883 11:42:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4405 00:06:23.883 11:42:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:23.883 11:42:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:06:23.883 11:42:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:23.883 11:42:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:23.883 11:42:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:23.883 11:42:50 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:06:24.142 [2024-05-14 11:42:50.998190] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:06:24.142 [2024-05-14 11:42:50.998265] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3635148 ] 00:06:24.142 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.400 [2024-05-14 11:42:51.251600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.400 [2024-05-14 11:42:51.341139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.400 [2024-05-14 11:42:51.399847] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:24.400 [2024-05-14 11:42:51.415803] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:24.400 [2024-05-14 11:42:51.416192] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:06:24.400 INFO: Running with entropic power schedule (0xFF, 100). 00:06:24.400 INFO: Seed: 2092483620 00:06:24.400 INFO: Loaded 1 modules (351429 inline 8-bit counters): 351429 [0x290e30c, 0x2963fd1), 00:06:24.400 INFO: Loaded 1 PC tables (351429 PCs): 351429 [0x2963fd8,0x2ec0c28), 00:06:24.400 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:24.400 INFO: A corpus is not provided, starting from an empty corpus 00:06:24.400 #2 INITED exec/s: 0 rss: 63Mb 00:06:24.400 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:24.400 This may also happen if the target rejected all inputs we tried so far 00:06:24.658 [2024-05-14 11:42:51.493002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.658 [2024-05-14 11:42:51.493038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.917 NEW_FUNC[1/686]: 0x48a020 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:06:24.917 NEW_FUNC[2/686]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:24.917 #11 NEW cov: 11838 ft: 11836 corp: 2/17b lim: 45 exec/s: 0 rss: 70Mb L: 16/16 MS: 4 CrossOver-InsertByte-CrossOver-InsertRepeatedBytes- 00:06:24.917 [2024-05-14 11:42:51.823118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.917 [2024-05-14 11:42:51.823158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.917 #22 NEW cov: 11968 ft: 12662 corp: 3/33b lim: 45 exec/s: 0 rss: 70Mb L: 16/16 MS: 1 ChangeBit- 00:06:24.917 [2024-05-14 11:42:51.873111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.917 [2024-05-14 11:42:51.873141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.917 #23 NEW cov: 11974 ft: 12961 corp: 4/49b lim: 45 exec/s: 0 rss: 70Mb L: 16/16 MS: 1 ChangeBit- 
00:06:24.917 [2024-05-14 11:42:51.913074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.917 [2024-05-14 11:42:51.913102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.917 #29 NEW cov: 12059 ft: 13183 corp: 5/65b lim: 45 exec/s: 0 rss: 70Mb L: 16/16 MS: 1 ChangeBit- 00:06:24.917 [2024-05-14 11:42:51.953334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.917 [2024-05-14 11:42:51.953362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.917 #30 NEW cov: 12059 ft: 13263 corp: 6/77b lim: 45 exec/s: 0 rss: 70Mb L: 12/16 MS: 1 CrossOver- 00:06:24.917 [2024-05-14 11:42:52.003507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.917 [2024-05-14 11:42:52.003545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.176 #36 NEW cov: 12059 ft: 13386 corp: 7/90b lim: 45 exec/s: 0 rss: 70Mb L: 13/16 MS: 1 EraseBytes- 00:06:25.176 [2024-05-14 11:42:52.053427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.176 [2024-05-14 11:42:52.053455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.176 #41 NEW cov: 12059 ft: 13437 corp: 8/107b lim: 45 exec/s: 0 rss: 70Mb L: 17/17 MS: 5 ChangeBit-InsertByte-ChangeBit-ChangeBit-InsertRepeatedBytes- 00:06:25.176 [2024-05-14 11:42:52.094343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:20202020 cdw11:20200001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.176 [2024-05-14 11:42:52.094371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.176 [2024-05-14 11:42:52.094504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:20202020 cdw11:20200001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.176 [2024-05-14 11:42:52.094526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.176 [2024-05-14 11:42:52.094647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:20202020 cdw11:20200001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.176 [2024-05-14 11:42:52.094666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.176 [2024-05-14 11:42:52.094793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:20202020 cdw11:20200001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.176 [2024-05-14 11:42:52.094810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.176 #42 NEW cov: 12059 ft: 14300 corp: 9/146b lim: 45 exec/s: 0 rss: 71Mb L: 39/39 MS: 1 
InsertRepeatedBytes- 00:06:25.176 [2024-05-14 11:42:52.133845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.176 [2024-05-14 11:42:52.133875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.176 #43 NEW cov: 12059 ft: 14418 corp: 10/163b lim: 45 exec/s: 0 rss: 71Mb L: 17/39 MS: 1 CMP- DE: "1\277\017`2\177\000\000"- 00:06:25.176 [2024-05-14 11:42:52.183801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.176 [2024-05-14 11:42:52.183829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.176 #44 NEW cov: 12059 ft: 14474 corp: 11/176b lim: 45 exec/s: 0 rss: 71Mb L: 13/39 MS: 1 CopyPart- 00:06:25.176 [2024-05-14 11:42:52.234120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.176 [2024-05-14 11:42:52.234148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.176 #45 NEW cov: 12059 ft: 14529 corp: 12/189b lim: 45 exec/s: 0 rss: 71Mb L: 13/39 MS: 1 ChangeBit- 00:06:25.435 [2024-05-14 11:42:52.273814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000080 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.435 [2024-05-14 11:42:52.273845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.435 #46 NEW cov: 12059 ft: 14592 corp: 13/201b lim: 45 exec/s: 0 rss: 71Mb L: 12/39 MS: 1 ChangeBit- 00:06:25.435 [2024-05-14 11:42:52.324374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.435 [2024-05-14 11:42:52.324407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.435 NEW_FUNC[1/1]: 0x19feca0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:25.435 #47 NEW cov: 12082 ft: 14639 corp: 14/213b lim: 45 exec/s: 0 rss: 71Mb L: 12/39 MS: 1 CopyPart- 00:06:25.435 [2024-05-14 11:42:52.364348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.435 [2024-05-14 11:42:52.364378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.435 [2024-05-14 11:42:52.364518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.435 [2024-05-14 11:42:52.364537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.435 #48 NEW cov: 12082 ft: 14928 corp: 15/233b lim: 45 exec/s: 0 rss: 71Mb L: 20/39 MS: 1 CopyPart- 00:06:25.435 [2024-05-14 11:42:52.404135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 
nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.435 [2024-05-14 11:42:52.404172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.435 #49 NEW cov: 12082 ft: 14961 corp: 16/249b lim: 45 exec/s: 0 rss: 71Mb L: 16/39 MS: 1 EraseBytes- 00:06:25.436 [2024-05-14 11:42:52.454349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:41450002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.436 [2024-05-14 11:42:52.454384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.436 #50 NEW cov: 12082 ft: 14997 corp: 17/265b lim: 45 exec/s: 50 rss: 71Mb L: 16/39 MS: 1 ChangeBit- 00:06:25.436 [2024-05-14 11:42:52.504441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:bf0f0a31 cdw11:60320003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.436 [2024-05-14 11:42:52.504470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.436 #51 NEW cov: 12082 ft: 15009 corp: 18/274b lim: 45 exec/s: 51 rss: 71Mb L: 9/39 MS: 1 PersAutoDict- DE: "1\277\017`2\177\000\000"- 00:06:25.694 [2024-05-14 11:42:52.544765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.694 [2024-05-14 11:42:52.544794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.694 #52 NEW cov: 12082 ft: 15015 corp: 19/286b lim: 45 exec/s: 52 rss: 71Mb L: 12/39 MS: 1 ChangeBinInt- 00:06:25.694 [2024-05-14 11:42:52.584632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.694 [2024-05-14 11:42:52.584660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.694 #53 NEW cov: 12082 ft: 15022 corp: 20/303b lim: 45 exec/s: 53 rss: 71Mb L: 17/39 MS: 1 CopyPart- 00:06:25.694 [2024-05-14 11:42:52.624804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000080 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.694 [2024-05-14 11:42:52.624833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.694 #54 NEW cov: 12082 ft: 15031 corp: 21/320b lim: 45 exec/s: 54 rss: 71Mb L: 17/39 MS: 1 CopyPart- 00:06:25.694 [2024-05-14 11:42:52.675163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff00f7 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.694 [2024-05-14 11:42:52.675190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.694 #55 NEW cov: 12082 ft: 15056 corp: 22/332b lim: 45 exec/s: 55 rss: 72Mb L: 12/39 MS: 1 ChangeBinInt- 00:06:25.694 [2024-05-14 11:42:52.715343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:41410041 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.694 [2024-05-14 11:42:52.715370] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.694 #56 NEW cov: 12082 ft: 15082 corp: 23/345b lim: 45 exec/s: 56 rss: 72Mb L: 13/39 MS: 1 CrossOver- 00:06:25.694 [2024-05-14 11:42:52.755534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:bf0f0a31 cdw11:e0320003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.694 [2024-05-14 11:42:52.755563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.694 #57 NEW cov: 12082 ft: 15098 corp: 24/354b lim: 45 exec/s: 57 rss: 72Mb L: 9/39 MS: 1 ChangeBit- 00:06:25.953 [2024-05-14 11:42:52.795711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.953 [2024-05-14 11:42:52.795739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.953 #58 NEW cov: 12082 ft: 15180 corp: 25/367b lim: 45 exec/s: 58 rss: 72Mb L: 13/39 MS: 1 CrossOver- 00:06:25.953 [2024-05-14 11:42:52.835976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:80000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.953 [2024-05-14 11:42:52.836002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.953 [2024-05-14 11:42:52.836117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.953 [2024-05-14 11:42:52.836135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.953 #59 NEW cov: 12082 ft: 15198 corp: 26/385b lim: 45 exec/s: 59 rss: 72Mb L: 18/39 MS: 1 InsertByte- 00:06:25.953 [2024-05-14 11:42:52.886110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00310005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.953 [2024-05-14 11:42:52.886139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.953 [2024-05-14 11:42:52.886255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0000327f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.953 [2024-05-14 11:42:52.886272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.953 #60 NEW cov: 12082 ft: 15202 corp: 27/409b lim: 45 exec/s: 60 rss: 72Mb L: 24/39 MS: 1 PersAutoDict- DE: "1\277\017`2\177\000\000"- 00:06:25.953 [2024-05-14 11:42:52.936742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:20202020 cdw11:20200001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.953 [2024-05-14 11:42:52.936771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.953 [2024-05-14 11:42:52.936888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:bf0f2031 cdw11:60320003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.953 [2024-05-14 11:42:52.936907] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.953 [2024-05-14 11:42:52.937028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:20202020 cdw11:20200001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.953 [2024-05-14 11:42:52.937044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.953 [2024-05-14 11:42:52.937167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:20202020 cdw11:20200001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.953 [2024-05-14 11:42:52.937186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.953 #61 NEW cov: 12082 ft: 15211 corp: 28/448b lim: 45 exec/s: 61 rss: 72Mb L: 39/39 MS: 1 PersAutoDict- DE: "1\277\017`2\177\000\000"- 00:06:25.953 [2024-05-14 11:42:52.986114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.953 [2024-05-14 11:42:52.986142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.953 #62 NEW cov: 12082 ft: 15223 corp: 29/461b lim: 45 exec/s: 62 rss: 72Mb L: 13/39 MS: 1 CMP- DE: "\010\000"- 00:06:25.953 [2024-05-14 11:42:53.026332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:41450002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.953 [2024-05-14 11:42:53.026361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.212 #63 NEW cov: 12082 ft: 15299 corp: 30/478b lim: 45 exec/s: 63 rss: 73Mb L: 17/39 MS: 1 InsertByte- 00:06:26.212 [2024-05-14 11:42:53.076496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00400000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.212 [2024-05-14 11:42:53.076524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.212 #64 NEW cov: 12082 ft: 15332 corp: 31/491b lim: 45 exec/s: 64 rss: 73Mb L: 13/39 MS: 1 ChangeBit- 00:06:26.212 [2024-05-14 11:42:53.117372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:20202020 cdw11:20200001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.212 [2024-05-14 11:42:53.117406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.212 [2024-05-14 11:42:53.117525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:20202020 cdw11:20200004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.212 [2024-05-14 11:42:53.117542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.212 [2024-05-14 11:42:53.117664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:20208181 cdw11:20200001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.212 [2024-05-14 11:42:53.117681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:06:26.212 [2024-05-14 11:42:53.117804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:20202020 cdw11:20200001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.212 [2024-05-14 11:42:53.117821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:26.212 #65 NEW cov: 12082 ft: 15354 corp: 32/535b lim: 45 exec/s: 65 rss: 73Mb L: 44/44 MS: 1 InsertRepeatedBytes- 00:06:26.212 [2024-05-14 11:42:53.156977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff2121 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.212 [2024-05-14 11:42:53.157003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.212 [2024-05-14 11:42:53.157128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.212 [2024-05-14 11:42:53.157145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.212 #70 NEW cov: 12082 ft: 15384 corp: 33/560b lim: 45 exec/s: 70 rss: 73Mb L: 25/44 MS: 5 InsertByte-ChangeByte-ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:06:26.212 [2024-05-14 11:42:53.196885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.212 [2024-05-14 11:42:53.196914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.212 #71 NEW cov: 12082 ft: 15396 corp: 34/573b lim: 45 exec/s: 71 rss: 73Mb L: 13/44 MS: 1 ChangeBit- 00:06:26.212 [2024-05-14 11:42:53.237193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:80000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.212 [2024-05-14 11:42:53.237222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.212 [2024-05-14 11:42:53.237338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.212 [2024-05-14 11:42:53.237363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.212 #72 NEW cov: 12082 ft: 15407 corp: 35/599b lim: 45 exec/s: 72 rss: 73Mb L: 26/44 MS: 1 InsertRepeatedBytes- 00:06:26.212 [2024-05-14 11:42:53.287031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000080 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.212 [2024-05-14 11:42:53.287060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.472 #73 NEW cov: 12082 ft: 15420 corp: 36/616b lim: 45 exec/s: 73 rss: 73Mb L: 17/44 MS: 1 ShuffleBytes- 00:06:26.472 [2024-05-14 11:42:53.327242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:06000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.472 [2024-05-14 11:42:53.327271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.472 #74 NEW cov: 12082 ft: 15434 corp: 37/629b lim: 45 exec/s: 74 rss: 73Mb L: 13/44 MS: 1 ChangeBinInt- 00:06:26.472 [2024-05-14 11:42:53.377616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.472 [2024-05-14 11:42:53.377643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.472 [2024-05-14 11:42:53.377759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.472 [2024-05-14 11:42:53.377776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.472 #75 NEW cov: 12082 ft: 15450 corp: 38/650b lim: 45 exec/s: 75 rss: 73Mb L: 21/44 MS: 1 InsertByte- 00:06:26.472 [2024-05-14 11:42:53.417942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.472 [2024-05-14 11:42:53.417971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.472 [2024-05-14 11:42:53.418106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:41414141 cdw11:41410002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.472 [2024-05-14 11:42:53.418126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.472 [2024-05-14 11:42:53.418262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.472 [2024-05-14 11:42:53.418280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.472 #76 NEW cov: 12082 ft: 15681 corp: 39/679b lim: 45 exec/s: 76 rss: 73Mb L: 29/44 MS: 1 CrossOver- 00:06:26.472 [2024-05-14 11:42:53.458075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:bf0f0a31 cdw11:60000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.472 [2024-05-14 11:42:53.458103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.472 [2024-05-14 11:42:53.458231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.472 [2024-05-14 11:42:53.458248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.472 [2024-05-14 11:42:53.458371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:26.472 [2024-05-14 11:42:53.458394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.472 #77 NEW cov: 12082 ft: 15692 corp: 40/708b lim: 45 exec/s: 38 rss: 73Mb L: 29/44 MS: 1 InsertRepeatedBytes- 00:06:26.472 #77 DONE cov: 12082 ft: 15692 corp: 40/708b lim: 45 exec/s: 38 rss: 73Mb 
00:06:26.472 ###### Recommended dictionary. ###### 00:06:26.472 "1\277\017`2\177\000\000" # Uses: 3 00:06:26.472 "\010\000" # Uses: 0 00:06:26.472 ###### End of recommended dictionary. ###### 00:06:26.472 Done 77 runs in 2 second(s) 00:06:26.472 [2024-05-14 11:42:53.481778] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:26.732 11:42:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:06:26.732 11:42:53 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:26.732 11:42:53 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:26.732 11:42:53 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:06:26.732 11:42:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:06:26.732 11:42:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:26.732 11:42:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:26.732 11:42:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:26.732 11:42:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:06:26.732 11:42:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:26.732 11:42:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:26.732 11:42:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:06:26.732 11:42:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4406 00:06:26.732 11:42:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:26.732 11:42:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:06:26.732 11:42:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:26.732 11:42:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:26.732 11:42:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:26.732 11:42:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:06:26.732 [2024-05-14 11:42:53.650995] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:06:26.732 [2024-05-14 11:42:53.651070] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3635498 ] 00:06:26.732 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.991 [2024-05-14 11:42:53.909742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.991 [2024-05-14 11:42:53.996630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.991 [2024-05-14 11:42:54.056120] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:26.991 [2024-05-14 11:42:54.072072] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:26.991 [2024-05-14 11:42:54.072486] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:06:27.250 INFO: Running with entropic power schedule (0xFF, 100). 00:06:27.250 INFO: Seed: 453515702 00:06:27.250 INFO: Loaded 1 modules (351429 inline 8-bit counters): 351429 [0x290e30c, 0x2963fd1), 00:06:27.250 INFO: Loaded 1 PC tables (351429 PCs): 351429 [0x2963fd8,0x2ec0c28), 00:06:27.250 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:27.250 INFO: A corpus is not provided, starting from an empty corpus 00:06:27.250 #2 INITED exec/s: 0 rss: 63Mb 00:06:27.250 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:27.250 This may also happen if the target rejected all inputs we tried so far 00:06:27.250 [2024-05-14 11:42:54.137908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:27.250 [2024-05-14 11:42:54.137939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.250 [2024-05-14 11:42:54.137993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.250 [2024-05-14 11:42:54.138007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.250 [2024-05-14 11:42:54.138058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.250 [2024-05-14 11:42:54.138072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.509 NEW_FUNC[1/684]: 0x48c830 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:06:27.509 NEW_FUNC[2/684]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:27.509 #3 NEW cov: 11755 ft: 11756 corp: 2/7b lim: 10 exec/s: 0 rss: 70Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:06:27.509 [2024-05-14 11:42:54.469238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:27.509 [2024-05-14 11:42:54.469295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.509 [2024-05-14 
11:42:54.469393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:27.509 [2024-05-14 11:42:54.469420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.509 [2024-05-14 11:42:54.469498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff00 cdw11:00000000 00:06:27.509 [2024-05-14 11:42:54.469523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.509 [2024-05-14 11:42:54.469602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.509 [2024-05-14 11:42:54.469627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.509 [2024-05-14 11:42:54.469705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.509 [2024-05-14 11:42:54.469730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:27.509 #4 NEW cov: 11885 ft: 12573 corp: 3/17b lim: 10 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:06:27.509 [2024-05-14 11:42:54.518636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000eb3 cdw11:00000000 00:06:27.509 [2024-05-14 11:42:54.518661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.509 #7 NEW cov: 11891 ft: 13143 corp: 4/19b lim: 10 exec/s: 0 rss: 70Mb L: 2/10 MS: 3 ChangeBit-ShuffleBytes-InsertByte- 00:06:27.509 [2024-05-14 11:42:54.559094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:27.509 [2024-05-14 11:42:54.559122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.509 [2024-05-14 11:42:54.559180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.509 [2024-05-14 11:42:54.559193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.509 [2024-05-14 11:42:54.559245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.509 [2024-05-14 11:42:54.559259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.509 [2024-05-14 11:42:54.559313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.509 [2024-05-14 11:42:54.559326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.509 #8 NEW cov: 11976 ft: 13441 corp: 5/27b lim: 10 exec/s: 0 rss: 70Mb L: 8/10 MS: 1 CopyPart- 00:06:27.769 [2024-05-14 11:42:54.599374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:27.769 [2024-05-14 11:42:54.599405] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.769 [2024-05-14 11:42:54.599457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:27.769 [2024-05-14 11:42:54.599471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.769 [2024-05-14 11:42:54.599525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff00 cdw11:00000000 00:06:27.769 [2024-05-14 11:42:54.599539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.769 [2024-05-14 11:42:54.599594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000002f cdw11:00000000 00:06:27.769 [2024-05-14 11:42:54.599607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.769 [2024-05-14 11:42:54.599660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.769 [2024-05-14 11:42:54.599673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:27.769 #9 NEW cov: 11976 ft: 13599 corp: 6/37b lim: 10 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 ChangeByte- 00:06:27.769 [2024-05-14 11:42:54.649254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000b3e8 cdw11:00000000 00:06:27.769 [2024-05-14 11:42:54.649280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.769 [2024-05-14 11:42:54.649335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:27.769 [2024-05-14 11:42:54.649349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.769 [2024-05-14 11:42:54.649406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:27.769 [2024-05-14 11:42:54.649420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.769 #11 NEW cov: 11976 ft: 13651 corp: 7/44b lim: 10 exec/s: 0 rss: 71Mb L: 7/10 MS: 2 EraseBytes-InsertRepeatedBytes- 00:06:27.769 [2024-05-14 11:42:54.689471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.769 [2024-05-14 11:42:54.689496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.769 [2024-05-14 11:42:54.689554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.769 [2024-05-14 11:42:54.689568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.769 [2024-05-14 11:42:54.689623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.769 [2024-05-14 11:42:54.689637] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.769 [2024-05-14 11:42:54.689695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 00:06:27.769 [2024-05-14 11:42:54.689708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.769 #12 NEW cov: 11976 ft: 13679 corp: 8/52b lim: 10 exec/s: 0 rss: 71Mb L: 8/10 MS: 1 ChangeBinInt- 00:06:27.769 [2024-05-14 11:42:54.739735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afb cdw11:00000000 00:06:27.769 [2024-05-14 11:42:54.739760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.769 [2024-05-14 11:42:54.739811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:27.769 [2024-05-14 11:42:54.739825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.769 [2024-05-14 11:42:54.739879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff00 cdw11:00000000 00:06:27.769 [2024-05-14 11:42:54.739909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.769 [2024-05-14 11:42:54.739964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000002f cdw11:00000000 00:06:27.769 [2024-05-14 11:42:54.739978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.769 [2024-05-14 11:42:54.740031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.769 [2024-05-14 11:42:54.740044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:27.769 #13 NEW cov: 11976 ft: 13734 corp: 9/62b lim: 10 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 ChangeBinInt- 00:06:27.769 [2024-05-14 11:42:54.789653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004a00 cdw11:00000000 00:06:27.769 [2024-05-14 11:42:54.789678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.769 [2024-05-14 11:42:54.789735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.769 [2024-05-14 11:42:54.789748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.769 [2024-05-14 11:42:54.789805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.769 [2024-05-14 11:42:54.789818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.769 #14 NEW cov: 11976 ft: 13746 corp: 10/68b lim: 10 exec/s: 0 rss: 71Mb L: 6/10 MS: 1 ChangeBit- 00:06:27.769 [2024-05-14 11:42:54.829823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:27.769 [2024-05-14 11:42:54.829848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.769 [2024-05-14 11:42:54.829903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.769 [2024-05-14 11:42:54.829919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.769 [2024-05-14 11:42:54.829973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:27.769 [2024-05-14 11:42:54.829987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.769 [2024-05-14 11:42:54.830039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004000 cdw11:00000000 00:06:27.769 [2024-05-14 11:42:54.830053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.769 #15 NEW cov: 11976 ft: 13887 corp: 11/76b lim: 10 exec/s: 0 rss: 71Mb L: 8/10 MS: 1 ChangeBit- 00:06:28.028 [2024-05-14 11:42:54.869869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000b3e8 cdw11:00000000 00:06:28.028 [2024-05-14 11:42:54.869894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.028 [2024-05-14 11:42:54.869952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e9e8 cdw11:00000000 00:06:28.028 [2024-05-14 11:42:54.869965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.028 [2024-05-14 11:42:54.870022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:28.028 [2024-05-14 11:42:54.870035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.028 #16 NEW cov: 11976 ft: 13928 corp: 12/83b lim: 10 exec/s: 0 rss: 71Mb L: 7/10 MS: 1 ChangeBit- 00:06:28.028 [2024-05-14 11:42:54.910197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000b3e8 cdw11:00000000 00:06:28.028 [2024-05-14 11:42:54.910222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.028 [2024-05-14 11:42:54.910276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e9e8 cdw11:00000000 00:06:28.028 [2024-05-14 11:42:54.910290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.028 [2024-05-14 11:42:54.910345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:28.028 [2024-05-14 11:42:54.910358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.028 [2024-05-14 11:42:54.910413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE 
IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000e9e8 cdw11:00000000 00:06:28.028 [2024-05-14 11:42:54.910426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.028 [2024-05-14 11:42:54.910478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:28.028 [2024-05-14 11:42:54.910492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:28.028 #17 NEW cov: 11976 ft: 13967 corp: 13/93b lim: 10 exec/s: 0 rss: 71Mb L: 10/10 MS: 1 CopyPart- 00:06:28.028 [2024-05-14 11:42:54.960242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:28.028 [2024-05-14 11:42:54.960267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.028 [2024-05-14 11:42:54.960320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.028 [2024-05-14 11:42:54.960336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.028 [2024-05-14 11:42:54.960391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000002e cdw11:00000000 00:06:28.028 [2024-05-14 11:42:54.960405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.028 [2024-05-14 11:42:54.960458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.028 [2024-05-14 11:42:54.960471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.028 #18 NEW cov: 11976 ft: 13984 corp: 14/101b lim: 10 exec/s: 0 rss: 71Mb L: 8/10 MS: 1 ChangeByte- 00:06:28.028 [2024-05-14 11:42:55.000351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e8e9 cdw11:00000000 00:06:28.028 [2024-05-14 11:42:55.000376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.028 [2024-05-14 11:42:55.000440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:28.028 [2024-05-14 11:42:55.000454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.028 [2024-05-14 11:42:55.000523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000e8e9 cdw11:00000000 00:06:28.028 [2024-05-14 11:42:55.000537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.028 [2024-05-14 11:42:55.000595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:28.028 [2024-05-14 11:42:55.000608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.028 NEW_FUNC[1/1]: 0x19feca0 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:28.028 #19 NEW cov: 11999 ft: 14100 corp: 15/110b lim: 10 exec/s: 0 rss: 72Mb L: 9/10 MS: 1 EraseBytes- 00:06:28.028 [2024-05-14 11:42:55.050377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000b3fe cdw11:00000000 00:06:28.028 [2024-05-14 11:42:55.050407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.028 [2024-05-14 11:42:55.050464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:28.028 [2024-05-14 11:42:55.050477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.028 [2024-05-14 11:42:55.050533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:28.028 [2024-05-14 11:42:55.050546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.028 #20 NEW cov: 11999 ft: 14142 corp: 16/117b lim: 10 exec/s: 0 rss: 72Mb L: 7/10 MS: 1 ChangeByte- 00:06:28.028 [2024-05-14 11:42:55.090493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000b3e8 cdw11:00000000 00:06:28.028 [2024-05-14 11:42:55.090519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.028 [2024-05-14 11:42:55.090574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:28.028 [2024-05-14 11:42:55.090587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.028 [2024-05-14 11:42:55.090642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000e823 cdw11:00000000 00:06:28.028 [2024-05-14 11:42:55.090659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.028 #21 NEW cov: 11999 ft: 14184 corp: 17/124b lim: 10 exec/s: 0 rss: 72Mb L: 7/10 MS: 1 ChangeByte- 00:06:28.287 [2024-05-14 11:42:55.130491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:28.287 [2024-05-14 11:42:55.130517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.287 [2024-05-14 11:42:55.130575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:28.287 [2024-05-14 11:42:55.130589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.287 #22 NEW cov: 11999 ft: 14347 corp: 18/129b lim: 10 exec/s: 22 rss: 72Mb L: 5/10 MS: 1 CrossOver- 00:06:28.287 [2024-05-14 11:42:55.170762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:28.287 [2024-05-14 11:42:55.170789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.287 [2024-05-14 11:42:55.170846] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.287 [2024-05-14 11:42:55.170860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.287 [2024-05-14 11:42:55.170914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.287 [2024-05-14 11:42:55.170928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.287 #23 NEW cov: 11999 ft: 14381 corp: 19/136b lim: 10 exec/s: 23 rss: 72Mb L: 7/10 MS: 1 EraseBytes- 00:06:28.287 [2024-05-14 11:42:55.210568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000c18b cdw11:00000000 00:06:28.287 [2024-05-14 11:42:55.210594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.287 #26 NEW cov: 11999 ft: 14418 corp: 20/138b lim: 10 exec/s: 26 rss: 72Mb L: 2/10 MS: 3 ChangeBit-ChangeBit-InsertByte- 00:06:28.287 [2024-05-14 11:42:55.250700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000c1a4 cdw11:00000000 00:06:28.287 [2024-05-14 11:42:55.250727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.287 #27 NEW cov: 11999 ft: 14504 corp: 21/141b lim: 10 exec/s: 27 rss: 72Mb L: 3/10 MS: 1 InsertByte- 00:06:28.287 [2024-05-14 11:42:55.291302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afb cdw11:00000000 00:06:28.287 [2024-05-14 11:42:55.291329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.287 [2024-05-14 11:42:55.291389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:28.287 [2024-05-14 11:42:55.291402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.287 [2024-05-14 11:42:55.291453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff00 cdw11:00000000 00:06:28.287 [2024-05-14 11:42:55.291467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.287 [2024-05-14 11:42:55.291519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000002f cdw11:00000000 00:06:28.287 [2024-05-14 11:42:55.291532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.287 [2024-05-14 11:42:55.291587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.287 [2024-05-14 11:42:55.291600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:28.287 #28 NEW cov: 11999 ft: 14570 corp: 22/151b lim: 10 exec/s: 28 rss: 72Mb L: 10/10 MS: 1 ShuffleBytes- 00:06:28.287 [2024-05-14 11:42:55.341325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 
nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.287 [2024-05-14 11:42:55.341351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.287 [2024-05-14 11:42:55.341410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.287 [2024-05-14 11:42:55.341424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.287 [2024-05-14 11:42:55.341475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.287 [2024-05-14 11:42:55.341489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.287 [2024-05-14 11:42:55.341542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff08 cdw11:00000000 00:06:28.287 [2024-05-14 11:42:55.341555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.287 #29 NEW cov: 11999 ft: 14577 corp: 23/159b lim: 10 exec/s: 29 rss: 72Mb L: 8/10 MS: 1 CrossOver- 00:06:28.546 [2024-05-14 11:42:55.381503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.546 [2024-05-14 11:42:55.381528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.546 [2024-05-14 11:42:55.381587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.546 [2024-05-14 11:42:55.381601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.546 [2024-05-14 11:42:55.381657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.546 [2024-05-14 11:42:55.381670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.546 [2024-05-14 11:42:55.381726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000208 cdw11:00000000 00:06:28.546 [2024-05-14 11:42:55.381739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.546 #30 NEW cov: 11999 ft: 14587 corp: 24/167b lim: 10 exec/s: 30 rss: 72Mb L: 8/10 MS: 1 ChangeBinInt- 00:06:28.546 [2024-05-14 11:42:55.421687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afb cdw11:00000000 00:06:28.546 [2024-05-14 11:42:55.421712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.546 [2024-05-14 11:42:55.421768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:28.546 [2024-05-14 11:42:55.421782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.546 [2024-05-14 11:42:55.421836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 
nsid:0 cdw10:0000ff00 cdw11:00000000 00:06:28.546 [2024-05-14 11:42:55.421850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.546 [2024-05-14 11:42:55.421903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000002f cdw11:00000000 00:06:28.546 [2024-05-14 11:42:55.421920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.546 [2024-05-14 11:42:55.421976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000020 cdw11:00000000 00:06:28.546 [2024-05-14 11:42:55.421989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:28.546 #31 NEW cov: 11999 ft: 14597 corp: 25/177b lim: 10 exec/s: 31 rss: 72Mb L: 10/10 MS: 1 ChangeBit- 00:06:28.546 [2024-05-14 11:42:55.471328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e4b cdw11:00000000 00:06:28.546 [2024-05-14 11:42:55.471354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.546 #32 NEW cov: 11999 ft: 14600 corp: 26/180b lim: 10 exec/s: 32 rss: 72Mb L: 3/10 MS: 1 InsertByte- 00:06:28.546 [2024-05-14 11:42:55.511955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:28.546 [2024-05-14 11:42:55.511981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.546 [2024-05-14 11:42:55.512035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:28.546 [2024-05-14 11:42:55.512048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.546 [2024-05-14 11:42:55.512116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff00 cdw11:00000000 00:06:28.546 [2024-05-14 11:42:55.512130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.546 [2024-05-14 11:42:55.512183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000002f cdw11:00000000 00:06:28.546 [2024-05-14 11:42:55.512197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.546 [2024-05-14 11:42:55.512250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000a000 cdw11:00000000 00:06:28.546 [2024-05-14 11:42:55.512264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:28.546 #33 NEW cov: 11999 ft: 14605 corp: 27/190b lim: 10 exec/s: 33 rss: 72Mb L: 10/10 MS: 1 ChangeByte- 00:06:28.546 [2024-05-14 11:42:55.551730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.546 [2024-05-14 11:42:55.551755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.546 
[2024-05-14 11:42:55.551825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff08 cdw11:00000000 00:06:28.546 [2024-05-14 11:42:55.551839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.546 #34 NEW cov: 11999 ft: 14617 corp: 28/194b lim: 10 exec/s: 34 rss: 72Mb L: 4/10 MS: 1 EraseBytes- 00:06:28.546 [2024-05-14 11:42:55.592059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000b3e8 cdw11:00000000 00:06:28.546 [2024-05-14 11:42:55.592084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.546 [2024-05-14 11:42:55.592140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e900 cdw11:00000000 00:06:28.546 [2024-05-14 11:42:55.592153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.546 [2024-05-14 11:42:55.592208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:28.546 [2024-05-14 11:42:55.592225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.546 [2024-05-14 11:42:55.592277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:28.546 [2024-05-14 11:42:55.592290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.546 #35 NEW cov: 11999 ft: 14638 corp: 29/202b lim: 10 exec/s: 35 rss: 72Mb L: 8/10 MS: 1 InsertByte- 00:06:28.546 [2024-05-14 11:42:55.632089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000b3e8 cdw11:00000000 00:06:28.546 [2024-05-14 11:42:55.632116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.546 [2024-05-14 11:42:55.632172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e9e8 cdw11:00000000 00:06:28.546 [2024-05-14 11:42:55.632185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.546 [2024-05-14 11:42:55.632240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:28.546 [2024-05-14 11:42:55.632254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.805 #36 NEW cov: 11999 ft: 14647 corp: 30/209b lim: 10 exec/s: 36 rss: 72Mb L: 7/10 MS: 1 ShuffleBytes- 00:06:28.805 [2024-05-14 11:42:55.672180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:28.805 [2024-05-14 11:42:55.672206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.805 [2024-05-14 11:42:55.672262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000002e cdw11:00000000 00:06:28.805 [2024-05-14 11:42:55.672275] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.805 [2024-05-14 11:42:55.672330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.805 [2024-05-14 11:42:55.672344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.805 #37 NEW cov: 11999 ft: 14666 corp: 31/215b lim: 10 exec/s: 37 rss: 72Mb L: 6/10 MS: 1 EraseBytes- 00:06:28.805 [2024-05-14 11:42:55.712511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000b3e8 cdw11:00000000 00:06:28.805 [2024-05-14 11:42:55.712537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.805 [2024-05-14 11:42:55.712589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e9e8 cdw11:00000000 00:06:28.805 [2024-05-14 11:42:55.712602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.805 [2024-05-14 11:42:55.712654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000e89a cdw11:00000000 00:06:28.805 [2024-05-14 11:42:55.712668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.805 [2024-05-14 11:42:55.712722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000e9e8 cdw11:00000000 00:06:28.805 [2024-05-14 11:42:55.712735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.805 [2024-05-14 11:42:55.712787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:28.805 [2024-05-14 11:42:55.712799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:28.805 #38 NEW cov: 11999 ft: 14673 corp: 32/225b lim: 10 exec/s: 38 rss: 72Mb L: 10/10 MS: 1 ChangeByte- 00:06:28.805 [2024-05-14 11:42:55.752510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.805 [2024-05-14 11:42:55.752536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.805 [2024-05-14 11:42:55.752587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000023 cdw11:00000000 00:06:28.805 [2024-05-14 11:42:55.752601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.805 [2024-05-14 11:42:55.752656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.805 [2024-05-14 11:42:55.752669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.805 [2024-05-14 11:42:55.752720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff08 cdw11:00000000 00:06:28.805 [2024-05-14 11:42:55.752733] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.805 #39 NEW cov: 11999 ft: 14703 corp: 33/233b lim: 10 exec/s: 39 rss: 72Mb L: 8/10 MS: 1 ChangeByte- 00:06:28.805 [2024-05-14 11:42:55.792639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00004a00 cdw11:00000000 00:06:28.805 [2024-05-14 11:42:55.792665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.805 [2024-05-14 11:42:55.792719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.805 [2024-05-14 11:42:55.792732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.805 [2024-05-14 11:42:55.792800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.805 [2024-05-14 11:42:55.792814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.805 [2024-05-14 11:42:55.792869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.805 [2024-05-14 11:42:55.792882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.805 #40 NEW cov: 11999 ft: 14709 corp: 34/242b lim: 10 exec/s: 40 rss: 72Mb L: 9/10 MS: 1 CrossOver- 00:06:28.806 [2024-05-14 11:42:55.832785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e00 cdw11:00000000 00:06:28.806 [2024-05-14 11:42:55.832811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.806 [2024-05-14 11:42:55.832865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.806 [2024-05-14 11:42:55.832879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.806 [2024-05-14 11:42:55.832931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 00:06:28.806 [2024-05-14 11:42:55.832945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.806 [2024-05-14 11:42:55.832997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.806 [2024-05-14 11:42:55.833010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.806 #42 NEW cov: 11999 ft: 14722 corp: 35/251b lim: 10 exec/s: 42 rss: 72Mb L: 9/10 MS: 2 EraseBytes-CMP- DE: "\000\000\000\000\001\000\000\000"- 00:06:28.806 [2024-05-14 11:42:55.873012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afb cdw11:00000000 00:06:28.806 [2024-05-14 11:42:55.873037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.806 [2024-05-14 11:42:55.873091] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:28.806 [2024-05-14 11:42:55.873104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.806 [2024-05-14 11:42:55.873155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff00 cdw11:00000000 00:06:28.806 [2024-05-14 11:42:55.873169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.806 [2024-05-14 11:42:55.873219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000002f cdw11:00000000 00:06:28.806 [2024-05-14 11:42:55.873232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.806 [2024-05-14 11:42:55.873287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:28.806 [2024-05-14 11:42:55.873300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:28.806 #43 NEW cov: 11999 ft: 14729 corp: 36/261b lim: 10 exec/s: 43 rss: 72Mb L: 10/10 MS: 1 CopyPart- 00:06:29.063 [2024-05-14 11:42:55.912968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000b3e8 cdw11:00000000 00:06:29.063 [2024-05-14 11:42:55.912993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.063 [2024-05-14 11:42:55.913047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e900 cdw11:00000000 00:06:29.063 [2024-05-14 11:42:55.913060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.063 [2024-05-14 11:42:55.913113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000026e8 cdw11:00000000 00:06:29.063 [2024-05-14 11:42:55.913126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.063 [2024-05-14 11:42:55.913179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:29.063 [2024-05-14 11:42:55.913192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.063 #44 NEW cov: 11999 ft: 14740 corp: 37/270b lim: 10 exec/s: 44 rss: 72Mb L: 9/10 MS: 1 InsertByte- 00:06:29.063 [2024-05-14 11:42:55.953116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 00:06:29.063 [2024-05-14 11:42:55.953141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.063 [2024-05-14 11:42:55.953195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:29.063 [2024-05-14 11:42:55.953209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.063 [2024-05-14 11:42:55.953264] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:06:29.063 [2024-05-14 11:42:55.953277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.063 [2024-05-14 11:42:55.953331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 00:06:29.063 [2024-05-14 11:42:55.953347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.063 #45 NEW cov: 11999 ft: 14752 corp: 38/278b lim: 10 exec/s: 45 rss: 72Mb L: 8/10 MS: 1 ChangeBit- 00:06:29.063 [2024-05-14 11:42:55.992846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:29.063 [2024-05-14 11:42:55.992871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.063 #46 NEW cov: 11999 ft: 14770 corp: 39/281b lim: 10 exec/s: 46 rss: 73Mb L: 3/10 MS: 1 EraseBytes- 00:06:29.063 [2024-05-14 11:42:56.033076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:29.063 [2024-05-14 11:42:56.033101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.063 [2024-05-14 11:42:56.033171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e8e8 cdw11:00000000 00:06:29.063 [2024-05-14 11:42:56.033185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.063 #47 NEW cov: 11999 ft: 14775 corp: 40/286b lim: 10 exec/s: 47 rss: 73Mb L: 5/10 MS: 1 ChangeBit- 00:06:29.063 [2024-05-14 11:42:56.073075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000418b cdw11:00000000 00:06:29.063 [2024-05-14 11:42:56.073100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.063 #48 NEW cov: 11999 ft: 14797 corp: 41/288b lim: 10 exec/s: 48 rss: 73Mb L: 2/10 MS: 1 ChangeByte- 00:06:29.063 [2024-05-14 11:42:56.113417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000b3e8 cdw11:00000000 00:06:29.063 [2024-05-14 11:42:56.113443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.063 [2024-05-14 11:42:56.113500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e814 cdw11:00000000 00:06:29.063 [2024-05-14 11:42:56.113513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.063 [2024-05-14 11:42:56.113568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000e823 cdw11:00000000 00:06:29.063 [2024-05-14 11:42:56.113582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.063 #49 NEW cov: 11999 ft: 14804 corp: 42/295b lim: 10 exec/s: 24 rss: 73Mb L: 7/10 MS: 1 ChangeByte- 00:06:29.063 #49 DONE cov: 11999 ft: 14804 corp: 42/295b 
lim: 10 exec/s: 24 rss: 73Mb 00:06:29.063 ###### Recommended dictionary. ###### 00:06:29.063 "\000\000\000\000\001\000\000\000" # Uses: 0 00:06:29.063 ###### End of recommended dictionary. ###### 00:06:29.063 Done 49 runs in 2 second(s) 00:06:29.063 [2024-05-14 11:42:56.141182] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:29.322 11:42:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:06:29.322 11:42:56 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:29.322 11:42:56 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:29.322 11:42:56 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:06:29.322 11:42:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:06:29.322 11:42:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:29.322 11:42:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:29.322 11:42:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:29.322 11:42:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:06:29.322 11:42:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:29.322 11:42:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:29.322 11:42:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:06:29.322 11:42:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4407 00:06:29.322 11:42:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:29.322 11:42:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:06:29.322 11:42:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:29.322 11:42:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:29.322 11:42:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:29.322 11:42:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:06:29.322 [2024-05-14 11:42:56.307994] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:06:29.322 [2024-05-14 11:42:56.308067] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3636034 ] 00:06:29.322 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.580 [2024-05-14 11:42:56.559829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.580 [2024-05-14 11:42:56.651707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.838 [2024-05-14 11:42:56.710828] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.838 [2024-05-14 11:42:56.726786] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:29.838 [2024-05-14 11:42:56.727196] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:06:29.838 INFO: Running with entropic power schedule (0xFF, 100). 00:06:29.838 INFO: Seed: 3107521393 00:06:29.838 INFO: Loaded 1 modules (351429 inline 8-bit counters): 351429 [0x290e30c, 0x2963fd1), 00:06:29.838 INFO: Loaded 1 PC tables (351429 PCs): 351429 [0x2963fd8,0x2ec0c28), 00:06:29.838 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:29.838 INFO: A corpus is not provided, starting from an empty corpus 00:06:29.838 #2 INITED exec/s: 0 rss: 64Mb 00:06:29.838 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:29.838 This may also happen if the target rejected all inputs we tried so far 00:06:29.838 [2024-05-14 11:42:56.772408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:29.838 [2024-05-14 11:42:56.772436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.097 NEW_FUNC[1/684]: 0x48d220 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:06:30.097 NEW_FUNC[2/684]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:30.097 #10 NEW cov: 11755 ft: 11756 corp: 2/3b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 3 ShuffleBytes-ShuffleBytes-CrossOver- 00:06:30.097 [2024-05-14 11:42:57.103100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0b cdw11:00000000 00:06:30.097 [2024-05-14 11:42:57.103134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.097 #11 NEW cov: 11885 ft: 12348 corp: 3/5b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 ChangeBit- 00:06:30.097 [2024-05-14 11:42:57.153177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0b cdw11:00000000 00:06:30.097 [2024-05-14 11:42:57.153203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.097 #12 NEW cov: 11891 ft: 12602 corp: 4/7b lim: 10 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 ShuffleBytes- 00:06:30.356 [2024-05-14 11:42:57.193441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 
cdw10:00000a0a cdw11:00000000 00:06:30.356 [2024-05-14 11:42:57.193466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.356 [2024-05-14 11:42:57.193514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:30.356 [2024-05-14 11:42:57.193528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.356 #18 NEW cov: 11976 ft: 13010 corp: 5/11b lim: 10 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 CopyPart- 00:06:30.356 [2024-05-14 11:42:57.233545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:30.356 [2024-05-14 11:42:57.233570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.356 [2024-05-14 11:42:57.233620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000f0a cdw11:00000000 00:06:30.356 [2024-05-14 11:42:57.233633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.356 #19 NEW cov: 11976 ft: 13068 corp: 6/15b lim: 10 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 ChangeBinInt- 00:06:30.356 [2024-05-14 11:42:57.273510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008a0a cdw11:00000000 00:06:30.356 [2024-05-14 11:42:57.273535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.356 #20 NEW cov: 11976 ft: 13121 corp: 7/17b lim: 10 exec/s: 0 rss: 71Mb L: 2/4 MS: 1 ChangeBit- 00:06:30.356 [2024-05-14 11:42:57.314024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:30.356 [2024-05-14 11:42:57.314050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.356 [2024-05-14 11:42:57.314102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:30.356 [2024-05-14 11:42:57.314115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.356 [2024-05-14 11:42:57.314166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:30.356 [2024-05-14 11:42:57.314179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.356 [2024-05-14 11:42:57.314229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:30.356 [2024-05-14 11:42:57.314242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.356 #25 NEW cov: 11976 ft: 13499 corp: 8/26b lim: 10 exec/s: 0 rss: 71Mb L: 9/9 MS: 5 CrossOver-ShuffleBytes-ChangeByte-ChangeBit-InsertRepeatedBytes- 00:06:30.356 [2024-05-14 11:42:57.353740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000890b cdw11:00000000 00:06:30.356 [2024-05-14 11:42:57.353765] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.356 #26 NEW cov: 11976 ft: 13526 corp: 9/28b lim: 10 exec/s: 0 rss: 71Mb L: 2/9 MS: 1 ChangeByte- 00:06:30.356 [2024-05-14 11:42:57.393952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c70a cdw11:00000000 00:06:30.356 [2024-05-14 11:42:57.393977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.356 [2024-05-14 11:42:57.394029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0f cdw11:00000000 00:06:30.356 [2024-05-14 11:42:57.394042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.356 #27 NEW cov: 11976 ft: 13603 corp: 10/33b lim: 10 exec/s: 0 rss: 71Mb L: 5/9 MS: 1 InsertByte- 00:06:30.356 [2024-05-14 11:42:57.434027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e70a cdw11:00000000 00:06:30.356 [2024-05-14 11:42:57.434052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.615 #28 NEW cov: 11976 ft: 13707 corp: 11/35b lim: 10 exec/s: 0 rss: 71Mb L: 2/9 MS: 1 InsertByte- 00:06:30.615 [2024-05-14 11:42:57.474115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008a0a cdw11:00000000 00:06:30.615 [2024-05-14 11:42:57.474140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.615 #29 NEW cov: 11976 ft: 13722 corp: 12/37b lim: 10 exec/s: 0 rss: 71Mb L: 2/9 MS: 1 ShuffleBytes- 00:06:30.615 [2024-05-14 11:42:57.514249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:30.615 [2024-05-14 11:42:57.514274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.615 #33 NEW cov: 11976 ft: 13759 corp: 13/39b lim: 10 exec/s: 0 rss: 71Mb L: 2/9 MS: 4 ChangeByte-ChangeByte-CrossOver-CopyPart- 00:06:30.615 [2024-05-14 11:42:57.544644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:30.615 [2024-05-14 11:42:57.544669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.615 [2024-05-14 11:42:57.544722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:30.615 [2024-05-14 11:42:57.544735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.615 [2024-05-14 11:42:57.544786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:30.615 [2024-05-14 11:42:57.544799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.615 [2024-05-14 11:42:57.544848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff0b cdw11:00000000 00:06:30.615 [2024-05-14 
11:42:57.544861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.615 #34 NEW cov: 11976 ft: 13774 corp: 14/47b lim: 10 exec/s: 0 rss: 71Mb L: 8/9 MS: 1 InsertRepeatedBytes- 00:06:30.615 [2024-05-14 11:42:57.584522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000f0a cdw11:00000000 00:06:30.615 [2024-05-14 11:42:57.584547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.615 [2024-05-14 11:42:57.584598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:30.615 [2024-05-14 11:42:57.584611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.615 #35 NEW cov: 11976 ft: 13790 corp: 15/51b lim: 10 exec/s: 0 rss: 71Mb L: 4/9 MS: 1 ShuffleBytes- 00:06:30.615 [2024-05-14 11:42:57.624527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001a0b cdw11:00000000 00:06:30.615 [2024-05-14 11:42:57.624551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.615 #36 NEW cov: 11976 ft: 13867 corp: 16/53b lim: 10 exec/s: 0 rss: 71Mb L: 2/9 MS: 1 ChangeBit- 00:06:30.615 [2024-05-14 11:42:57.664660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000830a cdw11:00000000 00:06:30.615 [2024-05-14 11:42:57.664685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.615 NEW_FUNC[1/1]: 0x19feca0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:30.615 #37 NEW cov: 11999 ft: 13929 corp: 17/56b lim: 10 exec/s: 0 rss: 71Mb L: 3/9 MS: 1 InsertByte- 00:06:30.873 [2024-05-14 11:42:57.705139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:30.873 [2024-05-14 11:42:57.705164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.873 [2024-05-14 11:42:57.705214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:30.873 [2024-05-14 11:42:57.705227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.873 [2024-05-14 11:42:57.705276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 00:06:30.873 [2024-05-14 11:42:57.705289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.873 [2024-05-14 11:42:57.705337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:30.873 [2024-05-14 11:42:57.705351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.873 #38 NEW cov: 11999 ft: 13959 corp: 18/65b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:06:30.874 [2024-05-14 11:42:57.745070] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:30.874 [2024-05-14 11:42:57.745093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.874 [2024-05-14 11:42:57.745159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:30.874 [2024-05-14 11:42:57.745172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.874 [2024-05-14 11:42:57.745219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000b0a cdw11:00000000 00:06:30.874 [2024-05-14 11:42:57.745232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.874 #39 NEW cov: 11999 ft: 14092 corp: 19/72b lim: 10 exec/s: 39 rss: 72Mb L: 7/9 MS: 1 CrossOver- 00:06:30.874 [2024-05-14 11:42:57.785001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000830a cdw11:00000000 00:06:30.874 [2024-05-14 11:42:57.785024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.874 [2024-05-14 11:42:57.785042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000b0b cdw11:00000000 00:06:30.874 [2024-05-14 11:42:57.785052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.874 #40 NEW cov: 11999 ft: 14133 corp: 20/76b lim: 10 exec/s: 40 rss: 72Mb L: 4/9 MS: 1 CopyPart- 00:06:30.874 [2024-05-14 11:42:57.825162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:30.874 [2024-05-14 11:42:57.825190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.874 #41 NEW cov: 11999 ft: 14209 corp: 21/78b lim: 10 exec/s: 41 rss: 72Mb L: 2/9 MS: 1 ShuffleBytes- 00:06:30.874 [2024-05-14 11:42:57.855466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ab9 cdw11:00000000 00:06:30.874 [2024-05-14 11:42:57.855491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.874 [2024-05-14 11:42:57.855544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000b9b9 cdw11:00000000 00:06:30.874 [2024-05-14 11:42:57.855557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.874 [2024-05-14 11:42:57.855606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000b9b9 cdw11:00000000 00:06:30.874 [2024-05-14 11:42:57.855619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.874 #42 NEW cov: 11999 ft: 14215 corp: 22/85b lim: 10 exec/s: 42 rss: 72Mb L: 7/9 MS: 1 InsertRepeatedBytes- 00:06:30.874 [2024-05-14 11:42:57.895547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c70a cdw11:00000000 
00:06:30.874 [2024-05-14 11:42:57.895572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.874 [2024-05-14 11:42:57.895622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a3d cdw11:00000000 00:06:30.874 [2024-05-14 11:42:57.895636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.874 [2024-05-14 11:42:57.895684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000f0a cdw11:00000000 00:06:30.874 [2024-05-14 11:42:57.895713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.874 #43 NEW cov: 11999 ft: 14222 corp: 23/91b lim: 10 exec/s: 43 rss: 72Mb L: 6/9 MS: 1 InsertByte- 00:06:30.874 [2024-05-14 11:42:57.935476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001a03 cdw11:00000000 00:06:30.874 [2024-05-14 11:42:57.935500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.874 #44 NEW cov: 11999 ft: 14229 corp: 24/93b lim: 10 exec/s: 44 rss: 72Mb L: 2/9 MS: 1 ChangeBit- 00:06:31.132 [2024-05-14 11:42:57.975899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:31.132 [2024-05-14 11:42:57.975923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.133 [2024-05-14 11:42:57.975973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000aff cdw11:00000000 00:06:31.133 [2024-05-14 11:42:57.975986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.133 [2024-05-14 11:42:57.976034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.133 [2024-05-14 11:42:57.976047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.133 [2024-05-14 11:42:57.976095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.133 [2024-05-14 11:42:57.976108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.133 #45 NEW cov: 11999 ft: 14242 corp: 25/101b lim: 10 exec/s: 45 rss: 72Mb L: 8/9 MS: 1 CrossOver- 00:06:31.133 [2024-05-14 11:42:58.015790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000f0a cdw11:00000000 00:06:31.133 [2024-05-14 11:42:58.015814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.133 [2024-05-14 11:42:58.015863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a02 cdw11:00000000 00:06:31.133 [2024-05-14 11:42:58.015876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.133 #46 NEW cov: 11999 ft: 14246 corp: 26/105b lim: 10 
exec/s: 46 rss: 72Mb L: 4/9 MS: 1 ChangeBit- 00:06:31.133 [2024-05-14 11:42:58.055901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c70a cdw11:00000000 00:06:31.133 [2024-05-14 11:42:58.055925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.133 [2024-05-14 11:42:58.055991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a0f cdw11:00000000 00:06:31.133 [2024-05-14 11:42:58.056004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.133 #47 NEW cov: 11999 ft: 14252 corp: 27/110b lim: 10 exec/s: 47 rss: 72Mb L: 5/9 MS: 1 CopyPart- 00:06:31.133 [2024-05-14 11:42:58.096247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001aff cdw11:00000000 00:06:31.133 [2024-05-14 11:42:58.096272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.133 [2024-05-14 11:42:58.096320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.133 [2024-05-14 11:42:58.096333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.133 [2024-05-14 11:42:58.096385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.133 [2024-05-14 11:42:58.096398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.133 [2024-05-14 11:42:58.096449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff03 cdw11:00000000 00:06:31.133 [2024-05-14 11:42:58.096461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.133 #48 NEW cov: 11999 ft: 14261 corp: 28/118b lim: 10 exec/s: 48 rss: 72Mb L: 8/9 MS: 1 InsertRepeatedBytes- 00:06:31.133 [2024-05-14 11:42:58.136470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.133 [2024-05-14 11:42:58.136495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.133 [2024-05-14 11:42:58.136545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.133 [2024-05-14 11:42:58.136558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.133 [2024-05-14 11:42:58.136604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.133 [2024-05-14 11:42:58.136617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.133 [2024-05-14 11:42:58.136667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.133 [2024-05-14 11:42:58.136679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.133 [2024-05-14 11:42:58.136727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00002340 cdw11:00000000 00:06:31.133 [2024-05-14 11:42:58.136742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:31.133 #49 NEW cov: 11999 ft: 14312 corp: 29/128b lim: 10 exec/s: 49 rss: 72Mb L: 10/10 MS: 1 InsertByte- 00:06:31.133 [2024-05-14 11:42:58.186171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000cf0a cdw11:00000000 00:06:31.133 [2024-05-14 11:42:58.186196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.133 #50 NEW cov: 11999 ft: 14366 corp: 30/130b lim: 10 exec/s: 50 rss: 72Mb L: 2/10 MS: 1 ChangeByte- 00:06:31.391 [2024-05-14 11:42:58.226761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:31.391 [2024-05-14 11:42:58.226786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.391 [2024-05-14 11:42:58.226836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000aff cdw11:00000000 00:06:31.391 [2024-05-14 11:42:58.226849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.391 [2024-05-14 11:42:58.226899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.391 [2024-05-14 11:42:58.226912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.392 [2024-05-14 11:42:58.226959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.392 [2024-05-14 11:42:58.226971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.392 [2024-05-14 11:42:58.227019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ff0b cdw11:00000000 00:06:31.392 [2024-05-14 11:42:58.227032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:31.392 #51 NEW cov: 11999 ft: 14400 corp: 31/140b lim: 10 exec/s: 51 rss: 73Mb L: 10/10 MS: 1 CrossOver- 00:06:31.392 [2024-05-14 11:42:58.266634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:31.392 [2024-05-14 11:42:58.266660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.392 [2024-05-14 11:42:58.266727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000f0a cdw11:00000000 00:06:31.392 [2024-05-14 11:42:58.266741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.392 [2024-05-14 11:42:58.266791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000a0a cdw11:00000000 
00:06:31.392 [2024-05-14 11:42:58.266804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.392 #52 NEW cov: 11999 ft: 14404 corp: 32/146b lim: 10 exec/s: 52 rss: 73Mb L: 6/10 MS: 1 EraseBytes- 00:06:31.392 [2024-05-14 11:42:58.306975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002bff cdw11:00000000 00:06:31.392 [2024-05-14 11:42:58.307000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.392 [2024-05-14 11:42:58.307068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.392 [2024-05-14 11:42:58.307081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.392 [2024-05-14 11:42:58.307130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.392 [2024-05-14 11:42:58.307146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.392 [2024-05-14 11:42:58.307196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.392 [2024-05-14 11:42:58.307209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.392 [2024-05-14 11:42:58.307257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ff40 cdw11:00000000 00:06:31.392 [2024-05-14 11:42:58.307270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:31.392 #53 NEW cov: 11999 ft: 14419 corp: 33/156b lim: 10 exec/s: 53 rss: 73Mb L: 10/10 MS: 1 InsertByte- 00:06:31.392 [2024-05-14 11:42:58.346969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.392 [2024-05-14 11:42:58.346996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.392 [2024-05-14 11:42:58.347046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.392 [2024-05-14 11:42:58.347060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.392 [2024-05-14 11:42:58.347109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.392 [2024-05-14 11:42:58.347122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.392 [2024-05-14 11:42:58.347170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002340 cdw11:00000000 00:06:31.392 [2024-05-14 11:42:58.347183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.392 #54 NEW cov: 11999 ft: 14452 corp: 34/164b lim: 10 exec/s: 54 rss: 73Mb L: 8/10 MS: 1 EraseBytes- 00:06:31.392 [2024-05-14 11:42:58.386742] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000b0a cdw11:00000000 00:06:31.392 [2024-05-14 11:42:58.386767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.392 #55 NEW cov: 11999 ft: 14454 corp: 35/167b lim: 10 exec/s: 55 rss: 73Mb L: 3/10 MS: 1 CrossOver- 00:06:31.392 [2024-05-14 11:42:58.417253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.392 [2024-05-14 11:42:58.417277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.392 [2024-05-14 11:42:58.417329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.392 [2024-05-14 11:42:58.417342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.392 [2024-05-14 11:42:58.417396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.392 [2024-05-14 11:42:58.417410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.392 [2024-05-14 11:42:58.417474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.392 [2024-05-14 11:42:58.417487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.392 [2024-05-14 11:42:58.417537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00002340 cdw11:00000000 00:06:31.392 [2024-05-14 11:42:58.417554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:31.392 #56 NEW cov: 11999 ft: 14458 corp: 36/177b lim: 10 exec/s: 56 rss: 73Mb L: 10/10 MS: 1 ShuffleBytes- 00:06:31.392 [2024-05-14 11:42:58.456932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000140a cdw11:00000000 00:06:31.392 [2024-05-14 11:42:58.456958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.392 #57 NEW cov: 11999 ft: 14509 corp: 37/179b lim: 10 exec/s: 57 rss: 73Mb L: 2/10 MS: 1 InsertByte- 00:06:31.651 [2024-05-14 11:42:58.487121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000f0a cdw11:00000000 00:06:31.651 [2024-05-14 11:42:58.487147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.651 [2024-05-14 11:42:58.487197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000750a cdw11:00000000 00:06:31.651 [2024-05-14 11:42:58.487210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.651 #58 NEW cov: 11999 ft: 14516 corp: 38/184b lim: 10 exec/s: 58 rss: 73Mb L: 5/10 MS: 1 InsertByte- 00:06:31.651 [2024-05-14 11:42:58.527338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff 
cdw11:00000000 00:06:31.651 [2024-05-14 11:42:58.527364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.651 [2024-05-14 11:42:58.527418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.651 [2024-05-14 11:42:58.527432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.651 [2024-05-14 11:42:58.527480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff40 cdw11:00000000 00:06:31.651 [2024-05-14 11:42:58.527494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.651 #59 NEW cov: 11999 ft: 14572 corp: 39/190b lim: 10 exec/s: 59 rss: 73Mb L: 6/10 MS: 1 EraseBytes- 00:06:31.651 [2024-05-14 11:42:58.567239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001e03 cdw11:00000000 00:06:31.651 [2024-05-14 11:42:58.567264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.651 #60 NEW cov: 11999 ft: 14582 corp: 40/192b lim: 10 exec/s: 60 rss: 73Mb L: 2/10 MS: 1 ChangeBit- 00:06:31.651 [2024-05-14 11:42:58.607607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000830a cdw11:00000000 00:06:31.651 [2024-05-14 11:42:58.607634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.651 [2024-05-14 11:42:58.607688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006868 cdw11:00000000 00:06:31.651 [2024-05-14 11:42:58.607701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.651 [2024-05-14 11:42:58.607751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000680b cdw11:00000000 00:06:31.651 [2024-05-14 11:42:58.607765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.651 #61 NEW cov: 11999 ft: 14593 corp: 41/198b lim: 10 exec/s: 61 rss: 73Mb L: 6/10 MS: 1 InsertRepeatedBytes- 00:06:31.651 [2024-05-14 11:42:58.647569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e70a cdw11:00000000 00:06:31.651 [2024-05-14 11:42:58.647594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.651 [2024-05-14 11:42:58.647648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e70a cdw11:00000000 00:06:31.651 [2024-05-14 11:42:58.647661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.651 #62 NEW cov: 11999 ft: 14615 corp: 42/202b lim: 10 exec/s: 62 rss: 73Mb L: 4/10 MS: 1 CopyPart- 00:06:31.651 [2024-05-14 11:42:58.687589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008a0a cdw11:00000000 00:06:31.651 [2024-05-14 11:42:58.687615] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.651 #63 NEW cov: 11999 ft: 14617 corp: 43/205b lim: 10 exec/s: 63 rss: 73Mb L: 3/10 MS: 1 CopyPart- 00:06:31.651 [2024-05-14 11:42:58.727947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.651 [2024-05-14 11:42:58.727972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.651 [2024-05-14 11:42:58.728022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.651 [2024-05-14 11:42:58.728035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.651 [2024-05-14 11:42:58.728085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff40 cdw11:00000000 00:06:31.651 [2024-05-14 11:42:58.728114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.910 #64 NEW cov: 11999 ft: 14630 corp: 44/211b lim: 10 exec/s: 64 rss: 73Mb L: 6/10 MS: 1 ShuffleBytes- 00:06:31.910 [2024-05-14 11:42:58.778256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008aff cdw11:00000000 00:06:31.910 [2024-05-14 11:42:58.778282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.910 [2024-05-14 11:42:58.778331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.910 [2024-05-14 11:42:58.778344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.910 [2024-05-14 11:42:58.778399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.910 [2024-05-14 11:42:58.778413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.910 [2024-05-14 11:42:58.778461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.910 [2024-05-14 11:42:58.778474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.910 [2024-05-14 11:42:58.778524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:31.910 [2024-05-14 11:42:58.778536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:31.910 #66 NEW cov: 11999 ft: 14649 corp: 45/221b lim: 10 exec/s: 33 rss: 73Mb L: 10/10 MS: 2 EraseBytes-InsertRepeatedBytes- 00:06:31.910 #66 DONE cov: 11999 ft: 14649 corp: 45/221b lim: 10 exec/s: 33 rss: 73Mb 00:06:31.910 Done 66 runs in 2 second(s) 00:06:31.910 [2024-05-14 11:42:58.800260] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:31.910 11:42:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf 
/var/tmp/suppress_nvmf_fuzz 00:06:31.910 11:42:58 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:31.910 11:42:58 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:31.910 11:42:58 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:06:31.910 11:42:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:06:31.910 11:42:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:31.910 11:42:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:31.910 11:42:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:31.910 11:42:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:06:31.910 11:42:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:31.910 11:42:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:31.910 11:42:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:06:31.910 11:42:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4408 00:06:31.910 11:42:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:31.910 11:42:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:06:31.910 11:42:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:31.910 11:42:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:31.910 11:42:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:31.910 11:42:58 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:06:31.910 [2024-05-14 11:42:58.966757] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:06:31.910 [2024-05-14 11:42:58.966848] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3636567 ] 00:06:31.910 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.169 [2024-05-14 11:42:59.218833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.428 [2024-05-14 11:42:59.310689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.428 [2024-05-14 11:42:59.369667] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:32.428 [2024-05-14 11:42:59.385627] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:32.428 [2024-05-14 11:42:59.386013] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:06:32.428 INFO: Running with entropic power schedule (0xFF, 100). 00:06:32.428 INFO: Seed: 1471553472 00:06:32.428 INFO: Loaded 1 modules (351429 inline 8-bit counters): 351429 [0x290e30c, 0x2963fd1), 00:06:32.428 INFO: Loaded 1 PC tables (351429 PCs): 351429 [0x2963fd8,0x2ec0c28), 00:06:32.428 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:32.428 INFO: A corpus is not provided, starting from an empty corpus 00:06:32.429 [2024-05-14 11:42:59.451243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.429 [2024-05-14 11:42:59.451270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.429 #2 INITED cov: 11783 ft: 11779 corp: 1/1b exec/s: 0 rss: 70Mb 00:06:32.429 [2024-05-14 11:42:59.481681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.429 [2024-05-14 11:42:59.481709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.429 [2024-05-14 11:42:59.481767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.429 [2024-05-14 11:42:59.481781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.429 [2024-05-14 11:42:59.481853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.429 [2024-05-14 11:42:59.481867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.429 [2024-05-14 11:42:59.481921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.429 [2024-05-14 11:42:59.481935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.429 #3 NEW cov: 11913 ft: 13078 corp: 2/5b lim: 5 exec/s: 0 rss: 70Mb L: 4/4 MS: 1 
InsertRepeatedBytes- 00:06:32.687 [2024-05-14 11:42:59.531999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.687 [2024-05-14 11:42:59.532023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.687 [2024-05-14 11:42:59.532079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.687 [2024-05-14 11:42:59.532092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.687 [2024-05-14 11:42:59.532146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.687 [2024-05-14 11:42:59.532159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.687 [2024-05-14 11:42:59.532212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.687 [2024-05-14 11:42:59.532225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.687 [2024-05-14 11:42:59.532277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.687 [2024-05-14 11:42:59.532290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.687 #4 NEW cov: 11919 ft: 13423 corp: 3/10b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 CopyPart- 00:06:32.687 [2024-05-14 11:42:59.581822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.687 [2024-05-14 11:42:59.581847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.687 [2024-05-14 11:42:59.581919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.687 [2024-05-14 11:42:59.581933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.687 [2024-05-14 11:42:59.581989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.687 [2024-05-14 11:42:59.582007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.687 #5 NEW cov: 12004 ft: 13863 corp: 4/13b lim: 5 exec/s: 0 rss: 70Mb L: 3/5 MS: 1 EraseBytes- 00:06:32.687 [2024-05-14 11:42:59.621606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.687 [2024-05-14 11:42:59.621631] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.687 #6 NEW cov: 12004 ft: 13931 corp: 5/14b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 ChangeByte- 00:06:32.687 [2024-05-14 11:42:59.661742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.687 [2024-05-14 11:42:59.661768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.687 #7 NEW cov: 12004 ft: 14038 corp: 6/15b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:32.687 [2024-05-14 11:42:59.702459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.687 [2024-05-14 11:42:59.702483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.687 [2024-05-14 11:42:59.702538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.687 [2024-05-14 11:42:59.702552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.687 [2024-05-14 11:42:59.702606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.687 [2024-05-14 11:42:59.702620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.687 [2024-05-14 11:42:59.702672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.687 [2024-05-14 11:42:59.702685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.687 [2024-05-14 11:42:59.702737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.687 [2024-05-14 11:42:59.702750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.687 #8 NEW cov: 12004 ft: 14099 corp: 7/20b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 CopyPart- 00:06:32.687 [2024-05-14 11:42:59.751986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.687 [2024-05-14 11:42:59.752011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.946 #9 NEW cov: 12004 ft: 14157 corp: 8/21b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:32.946 [2024-05-14 11:42:59.792702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.946 [2024-05-14 11:42:59.792726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.946 [2024-05-14 
11:42:59.792798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.946 [2024-05-14 11:42:59.792812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.946 [2024-05-14 11:42:59.792869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.946 [2024-05-14 11:42:59.792883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.946 [2024-05-14 11:42:59.792936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.946 [2024-05-14 11:42:59.792950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.946 [2024-05-14 11:42:59.793004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.946 [2024-05-14 11:42:59.793017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.946 #10 NEW cov: 12004 ft: 14271 corp: 9/26b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:32.946 [2024-05-14 11:42:59.842245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.946 [2024-05-14 11:42:59.842271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.946 #11 NEW cov: 12004 ft: 14332 corp: 10/27b lim: 5 exec/s: 0 rss: 71Mb L: 1/5 MS: 1 ChangeByte- 00:06:32.946 [2024-05-14 11:42:59.882314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.946 [2024-05-14 11:42:59.882340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.946 #12 NEW cov: 12004 ft: 14346 corp: 11/28b lim: 5 exec/s: 0 rss: 71Mb L: 1/5 MS: 1 ChangeByte- 00:06:32.946 [2024-05-14 11:42:59.922867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.946 [2024-05-14 11:42:59.922892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.946 [2024-05-14 11:42:59.922945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.946 [2024-05-14 11:42:59.922958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.946 [2024-05-14 11:42:59.923012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.946 
[2024-05-14 11:42:59.923025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.946 [2024-05-14 11:42:59.923077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.946 [2024-05-14 11:42:59.923091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.946 #13 NEW cov: 12004 ft: 14395 corp: 12/32b lim: 5 exec/s: 0 rss: 71Mb L: 4/5 MS: 1 ChangeBit- 00:06:32.946 [2024-05-14 11:42:59.963174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.946 [2024-05-14 11:42:59.963199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.946 [2024-05-14 11:42:59.963255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.946 [2024-05-14 11:42:59.963272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.946 [2024-05-14 11:42:59.963324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.946 [2024-05-14 11:42:59.963337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.946 [2024-05-14 11:42:59.963398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.946 [2024-05-14 11:42:59.963411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.946 [2024-05-14 11:42:59.963466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.946 [2024-05-14 11:42:59.963478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.946 #14 NEW cov: 12004 ft: 14406 corp: 13/37b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 ChangeByte- 00:06:32.947 [2024-05-14 11:43:00.003314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.947 [2024-05-14 11:43:00.003339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.947 [2024-05-14 11:43:00.003398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.947 [2024-05-14 11:43:00.003412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.947 [2024-05-14 11:43:00.003466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.947 [2024-05-14 11:43:00.003480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.947 [2024-05-14 11:43:00.003533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.947 [2024-05-14 11:43:00.003546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.947 [2024-05-14 11:43:00.003601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.947 [2024-05-14 11:43:00.003615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.947 #15 NEW cov: 12004 ft: 14420 corp: 14/42b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 ChangeBit- 00:06:33.206 [2024-05-14 11:43:00.043295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.206 [2024-05-14 11:43:00.043323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.206 [2024-05-14 11:43:00.043386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.206 [2024-05-14 11:43:00.043401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.206 [2024-05-14 11:43:00.043458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.206 [2024-05-14 11:43:00.043479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.206 [2024-05-14 11:43:00.043536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.206 [2024-05-14 11:43:00.043550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.206 #16 NEW cov: 12004 ft: 14436 corp: 15/46b lim: 5 exec/s: 0 rss: 71Mb L: 4/5 MS: 1 InsertByte- 00:06:33.206 [2024-05-14 11:43:00.093597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.206 [2024-05-14 11:43:00.093627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.206 [2024-05-14 11:43:00.093684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.206 [2024-05-14 11:43:00.093698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.206 [2024-05-14 11:43:00.093751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.206 [2024-05-14 11:43:00.093765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.206 [2024-05-14 11:43:00.093821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.206 [2024-05-14 11:43:00.093834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.206 [2024-05-14 11:43:00.093889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.206 [2024-05-14 11:43:00.093903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:33.206 #17 NEW cov: 12004 ft: 14555 corp: 16/51b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 CrossOver- 00:06:33.206 [2024-05-14 11:43:00.143384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.206 [2024-05-14 11:43:00.143411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.206 [2024-05-14 11:43:00.143469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.206 [2024-05-14 11:43:00.143483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.206 [2024-05-14 11:43:00.143539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.206 [2024-05-14 11:43:00.143553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.206 #18 NEW cov: 12004 ft: 14566 corp: 17/54b lim: 5 exec/s: 0 rss: 71Mb L: 3/5 MS: 1 ChangeByte- 00:06:33.206 [2024-05-14 11:43:00.183321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.206 [2024-05-14 11:43:00.183346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.206 [2024-05-14 11:43:00.183408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.206 [2024-05-14 11:43:00.183425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.206 #19 NEW cov: 12004 ft: 14736 corp: 18/56b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 CopyPart- 00:06:33.206 [2024-05-14 11:43:00.223767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.206 [2024-05-14 11:43:00.223793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.206 [2024-05-14 11:43:00.223850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.207 [2024-05-14 11:43:00.223863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.207 [2024-05-14 11:43:00.223918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.207 [2024-05-14 11:43:00.223932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.207 [2024-05-14 11:43:00.223986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.207 [2024-05-14 11:43:00.223999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.207 #20 NEW cov: 12004 ft: 14825 corp: 19/60b lim: 5 exec/s: 0 rss: 71Mb L: 4/5 MS: 1 CopyPart- 00:06:33.207 [2024-05-14 11:43:00.273633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.207 [2024-05-14 11:43:00.273658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.207 [2024-05-14 11:43:00.273715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.207 [2024-05-14 11:43:00.273729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.465 #21 NEW cov: 12004 ft: 14862 corp: 20/62b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 InsertByte- 00:06:33.465 [2024-05-14 11:43:00.324172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.465 [2024-05-14 11:43:00.324214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.465 [2024-05-14 11:43:00.324269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.465 [2024-05-14 11:43:00.324283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.465 [2024-05-14 11:43:00.324335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.465 [2024-05-14 11:43:00.324349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.465 [2024-05-14 11:43:00.324405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.465 [2024-05-14 
11:43:00.324419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.465 [2024-05-14 11:43:00.324474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.465 [2024-05-14 11:43:00.324491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:33.724 NEW_FUNC[1/1]: 0x19feca0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:33.724 #22 NEW cov: 12027 ft: 14896 corp: 21/67b lim: 5 exec/s: 22 rss: 72Mb L: 5/5 MS: 1 ChangeByte- 00:06:33.724 [2024-05-14 11:43:00.655232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.724 [2024-05-14 11:43:00.655289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.724 [2024-05-14 11:43:00.655374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.724 [2024-05-14 11:43:00.655409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.724 [2024-05-14 11:43:00.655488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.724 [2024-05-14 11:43:00.655514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.724 [2024-05-14 11:43:00.655593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.724 [2024-05-14 11:43:00.655619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.724 #23 NEW cov: 12027 ft: 15031 corp: 22/71b lim: 5 exec/s: 23 rss: 73Mb L: 4/5 MS: 1 ChangeByte- 00:06:33.724 [2024-05-14 11:43:00.704547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.724 [2024-05-14 11:43:00.704572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.724 #24 NEW cov: 12027 ft: 15059 corp: 23/72b lim: 5 exec/s: 24 rss: 73Mb L: 1/5 MS: 1 ChangeBit- 00:06:33.724 [2024-05-14 11:43:00.744674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.724 [2024-05-14 11:43:00.744699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.724 #25 NEW cov: 12027 ft: 15071 corp: 24/73b lim: 5 exec/s: 25 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:06:33.724 [2024-05-14 11:43:00.784743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:33.724 [2024-05-14 11:43:00.784768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.724 #26 NEW cov: 12027 ft: 15082 corp: 25/74b lim: 5 exec/s: 26 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:06:33.983 [2024-05-14 11:43:00.835370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.983 [2024-05-14 11:43:00.835400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.983 [2024-05-14 11:43:00.835470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.983 [2024-05-14 11:43:00.835485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.983 [2024-05-14 11:43:00.835549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.983 [2024-05-14 11:43:00.835567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.983 [2024-05-14 11:43:00.835620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.983 [2024-05-14 11:43:00.835633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.983 #27 NEW cov: 12027 ft: 15141 corp: 26/78b lim: 5 exec/s: 27 rss: 73Mb L: 4/5 MS: 1 ChangeByte- 00:06:33.983 [2024-05-14 11:43:00.885018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.983 [2024-05-14 11:43:00.885044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.983 #28 NEW cov: 12027 ft: 15144 corp: 27/79b lim: 5 exec/s: 28 rss: 73Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:33.983 [2024-05-14 11:43:00.915766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.983 [2024-05-14 11:43:00.915790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.983 [2024-05-14 11:43:00.915845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.983 [2024-05-14 11:43:00.915859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.983 [2024-05-14 11:43:00.915911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.983 [2024-05-14 11:43:00.915941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.983 [2024-05-14 
11:43:00.915999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.983 [2024-05-14 11:43:00.916012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.983 [2024-05-14 11:43:00.916066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.983 [2024-05-14 11:43:00.916079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:33.983 #29 NEW cov: 12027 ft: 15146 corp: 28/84b lim: 5 exec/s: 29 rss: 73Mb L: 5/5 MS: 1 CrossOver- 00:06:33.983 [2024-05-14 11:43:00.955740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.983 [2024-05-14 11:43:00.955765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.983 [2024-05-14 11:43:00.955820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.983 [2024-05-14 11:43:00.955833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.983 [2024-05-14 11:43:00.955889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.983 [2024-05-14 11:43:00.955903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.983 [2024-05-14 11:43:00.955958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.983 [2024-05-14 11:43:00.955971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.983 #30 NEW cov: 12027 ft: 15171 corp: 29/88b lim: 5 exec/s: 30 rss: 73Mb L: 4/5 MS: 1 CrossOver- 00:06:33.983 [2024-05-14 11:43:00.996002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.984 [2024-05-14 11:43:00.996027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.984 [2024-05-14 11:43:00.996082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.984 [2024-05-14 11:43:00.996096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.984 [2024-05-14 11:43:00.996150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.984 [2024-05-14 11:43:00.996164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.984 [2024-05-14 11:43:00.996217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.984 [2024-05-14 11:43:00.996230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.984 [2024-05-14 11:43:00.996284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.984 [2024-05-14 11:43:00.996297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:33.984 #31 NEW cov: 12027 ft: 15181 corp: 30/93b lim: 5 exec/s: 31 rss: 73Mb L: 5/5 MS: 1 ChangeBit- 00:06:33.984 [2024-05-14 11:43:01.045575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.984 [2024-05-14 11:43:01.045599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.984 #32 NEW cov: 12027 ft: 15190 corp: 31/94b lim: 5 exec/s: 32 rss: 73Mb L: 1/5 MS: 1 CopyPart- 00:06:34.243 [2024-05-14 11:43:01.085655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.243 [2024-05-14 11:43:01.085680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.243 #33 NEW cov: 12027 ft: 15211 corp: 32/95b lim: 5 exec/s: 33 rss: 73Mb L: 1/5 MS: 1 ChangeBit- 00:06:34.243 [2024-05-14 11:43:01.125757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.243 [2024-05-14 11:43:01.125781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.243 #34 NEW cov: 12027 ft: 15219 corp: 33/96b lim: 5 exec/s: 34 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:06:34.243 [2024-05-14 11:43:01.166338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.243 [2024-05-14 11:43:01.166363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.243 [2024-05-14 11:43:01.166420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.243 [2024-05-14 11:43:01.166437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.243 [2024-05-14 11:43:01.166504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.243 [2024-05-14 11:43:01.166518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.243 [2024-05-14 11:43:01.166570] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.243 [2024-05-14 11:43:01.166583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.243 #35 NEW cov: 12027 ft: 15224 corp: 34/100b lim: 5 exec/s: 35 rss: 73Mb L: 4/5 MS: 1 CopyPart- 00:06:34.243 [2024-05-14 11:43:01.206470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.243 [2024-05-14 11:43:01.206494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.243 [2024-05-14 11:43:01.206548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.243 [2024-05-14 11:43:01.206562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.243 [2024-05-14 11:43:01.206617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.243 [2024-05-14 11:43:01.206631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.243 [2024-05-14 11:43:01.206683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.243 [2024-05-14 11:43:01.206696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.243 #36 NEW cov: 12027 ft: 15257 corp: 35/104b lim: 5 exec/s: 36 rss: 73Mb L: 4/5 MS: 1 ChangeBit- 00:06:34.243 [2024-05-14 11:43:01.256347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.243 [2024-05-14 11:43:01.256371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.243 [2024-05-14 11:43:01.256457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.243 [2024-05-14 11:43:01.256471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.243 #37 NEW cov: 12027 ft: 15302 corp: 36/106b lim: 5 exec/s: 37 rss: 73Mb L: 2/5 MS: 1 InsertByte- 00:06:34.243 [2024-05-14 11:43:01.306619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.243 [2024-05-14 11:43:01.306644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.243 [2024-05-14 11:43:01.306714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.243 [2024-05-14 11:43:01.306728] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.243 [2024-05-14 11:43:01.306788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.243 [2024-05-14 11:43:01.306801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.243 #38 NEW cov: 12027 ft: 15314 corp: 37/109b lim: 5 exec/s: 38 rss: 73Mb L: 3/5 MS: 1 CrossOver- 00:06:34.502 [2024-05-14 11:43:01.346990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.502 [2024-05-14 11:43:01.347015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.502 [2024-05-14 11:43:01.347069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.502 [2024-05-14 11:43:01.347082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.502 [2024-05-14 11:43:01.347137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.502 [2024-05-14 11:43:01.347150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.502 [2024-05-14 11:43:01.347202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.502 [2024-05-14 11:43:01.347215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.503 [2024-05-14 11:43:01.347270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.503 [2024-05-14 11:43:01.347284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:34.503 #39 NEW cov: 12027 ft: 15318 corp: 38/114b lim: 5 exec/s: 39 rss: 73Mb L: 5/5 MS: 1 InsertByte- 00:06:34.503 [2024-05-14 11:43:01.387136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.503 [2024-05-14 11:43:01.387162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.503 [2024-05-14 11:43:01.387233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.503 [2024-05-14 11:43:01.387248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.503 [2024-05-14 11:43:01.387302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:34.503 [2024-05-14 11:43:01.387316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.503 [2024-05-14 11:43:01.387369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.503 [2024-05-14 11:43:01.387388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.503 [2024-05-14 11:43:01.387441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.503 [2024-05-14 11:43:01.387456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:34.503 #40 NEW cov: 12027 ft: 15322 corp: 39/119b lim: 5 exec/s: 40 rss: 73Mb L: 5/5 MS: 1 ChangeByte- 00:06:34.503 [2024-05-14 11:43:01.427231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.503 [2024-05-14 11:43:01.427257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.503 [2024-05-14 11:43:01.427311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.503 [2024-05-14 11:43:01.427324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.503 [2024-05-14 11:43:01.427396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.503 [2024-05-14 11:43:01.427411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.503 [2024-05-14 11:43:01.427467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.503 [2024-05-14 11:43:01.427480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.503 [2024-05-14 11:43:01.427534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:34.503 [2024-05-14 11:43:01.427547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:34.503 #41 NEW cov: 12027 ft: 15323 corp: 40/124b lim: 5 exec/s: 20 rss: 73Mb L: 5/5 MS: 1 CrossOver- 00:06:34.503 #41 DONE cov: 12027 ft: 15323 corp: 40/124b lim: 5 exec/s: 20 rss: 73Mb 00:06:34.503 Done 41 runs in 2 second(s) 00:06:34.503 [2024-05-14 11:43:01.456885] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:34.503 11:43:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:06:34.503 11:43:01 
llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:34.503 11:43:01 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:34.503 11:43:01 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:06:34.503 11:43:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:06:34.503 11:43:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:34.503 11:43:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:34.503 11:43:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:34.503 11:43:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:06:34.503 11:43:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:34.503 11:43:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:34.503 11:43:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:06:34.503 11:43:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4409 00:06:34.503 11:43:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:34.503 11:43:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:06:34.503 11:43:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:34.762 11:43:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:34.762 11:43:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:34.762 11:43:01 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:06:34.762 [2024-05-14 11:43:01.611258] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:06:34.762 [2024-05-14 11:43:01.611314] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3636956 ] 00:06:34.762 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.762 [2024-05-14 11:43:01.795210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.044 [2024-05-14 11:43:01.864117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.044 [2024-05-14 11:43:01.923211] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:35.044 [2024-05-14 11:43:01.939181] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:35.044 [2024-05-14 11:43:01.939598] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:06:35.044 INFO: Running with entropic power schedule (0xFF, 100). 
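The nvmf/run.sh trace above boils down to a short per-target setup before llvm_nvme_fuzz is launched: derive a TCP port from the fuzzer number, rewrite the listener port in the JSON config, register LeakSanitizer suppressions for allocations the target keeps alive on purpose, create the corpus directory, and run the fuzzer for the requested time on the requested core mask. The lines below are a condensed, illustrative bash sketch of those steps, not the script itself; SPDK_DIR and the redirection of sed's output into the per-run config file are assumptions made for readability rather than values read from run.sh.

#!/usr/bin/env bash
# Illustrative sketch of the per-fuzzer setup traced above (fuzzer type 9, 1 second, core 0x1).
# SPDK_DIR and the redirect of sed's output are assumptions, not values taken from run.sh.
SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
fuzzer_type=9 timen=1 core=0x1

port="44$(printf %02d "$fuzzer_type")"                      # 4409, as logged above
nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
suppress_file=/var/tmp/suppress_nvmf_fuzz
corpus_dir=$SPDK_DIR/../corpus/llvm_nvmf_${fuzzer_type}
export LSAN_OPTIONS=report_objects=1:suppressions=${suppress_file}:print_suppressions=0

mkdir -p "$corpus_dir"
# Point the NVMe/TCP listener at the per-fuzzer port instead of the default 4420.
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"${port}\"/" \
    "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
# Suppress leak reports for objects the target intentionally keeps alive.
printf 'leak:%s\n' spdk_nvmf_qpair_disconnect nvmf_ctrlr_create > "$suppress_file"

"$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
    -P "$SPDK_DIR/../output/llvm/" \
    -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:${port}" \
    -c "$nvmf_cfg" -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"

In the libFuzzer status lines that follow (#N NEW cov: ... ft: ... corp: ...), cov counts covered code edges, ft counts coverage features (finer-grained signals), corp reports the number of inputs in the corpus and their combined size, lim is the current input size limit, exec/s and rss report throughput and memory use, L gives the new input's length, and MS names the mutation sequence that produced it (e.g. CrossOver, ChangeBit).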
00:06:35.044 INFO: Seed: 4024556160 00:06:35.044 INFO: Loaded 1 modules (351429 inline 8-bit counters): 351429 [0x290e30c, 0x2963fd1), 00:06:35.044 INFO: Loaded 1 PC tables (351429 PCs): 351429 [0x2963fd8,0x2ec0c28), 00:06:35.044 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:35.044 INFO: A corpus is not provided, starting from an empty corpus 00:06:35.044 [2024-05-14 11:43:01.984768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.044 [2024-05-14 11:43:01.984796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.044 #2 INITED cov: 11783 ft: 11781 corp: 1/1b exec/s: 0 rss: 70Mb 00:06:35.044 [2024-05-14 11:43:02.014875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.044 [2024-05-14 11:43:02.014901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.044 [2024-05-14 11:43:02.014974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.044 [2024-05-14 11:43:02.014988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.044 #3 NEW cov: 11913 ft: 13117 corp: 2/3b lim: 5 exec/s: 0 rss: 70Mb L: 2/2 MS: 1 InsertByte- 00:06:35.044 [2024-05-14 11:43:02.064899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.044 [2024-05-14 11:43:02.064924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.044 #4 NEW cov: 11919 ft: 13319 corp: 3/4b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 CopyPart- 00:06:35.044 [2024-05-14 11:43:02.105010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.044 [2024-05-14 11:43:02.105035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.303 #5 NEW cov: 12004 ft: 13534 corp: 4/5b lim: 5 exec/s: 0 rss: 70Mb L: 1/2 MS: 1 ChangeBit- 00:06:35.303 [2024-05-14 11:43:02.145290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.303 [2024-05-14 11:43:02.145318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.304 [2024-05-14 11:43:02.145374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.304 [2024-05-14 11:43:02.145392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.304 #6 NEW cov: 12004 ft: 13659 corp: 5/7b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 ChangeBit- 00:06:35.304 
[2024-05-14 11:43:02.195467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.304 [2024-05-14 11:43:02.195492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.304 [2024-05-14 11:43:02.195564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.304 [2024-05-14 11:43:02.195578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.304 #7 NEW cov: 12004 ft: 13730 corp: 6/9b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 ChangeByte- 00:06:35.304 [2024-05-14 11:43:02.245619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.304 [2024-05-14 11:43:02.245644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.304 [2024-05-14 11:43:02.245699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.304 [2024-05-14 11:43:02.245713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.304 #8 NEW cov: 12004 ft: 13788 corp: 7/11b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 ShuffleBytes- 00:06:35.304 [2024-05-14 11:43:02.295576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.304 [2024-05-14 11:43:02.295601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.304 #9 NEW cov: 12004 ft: 13842 corp: 8/12b lim: 5 exec/s: 0 rss: 71Mb L: 1/2 MS: 1 ChangeByte- 00:06:35.304 [2024-05-14 11:43:02.335827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.304 [2024-05-14 11:43:02.335853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.304 [2024-05-14 11:43:02.335909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.304 [2024-05-14 11:43:02.335923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.304 #10 NEW cov: 12004 ft: 13867 corp: 9/14b lim: 5 exec/s: 0 rss: 71Mb L: 2/2 MS: 1 InsertByte- 00:06:35.304 [2024-05-14 11:43:02.376098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.304 [2024-05-14 11:43:02.376123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.304 [2024-05-14 11:43:02.376180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.304 [2024-05-14 11:43:02.376193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.304 [2024-05-14 11:43:02.376253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.304 [2024-05-14 11:43:02.376267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.562 #11 NEW cov: 12004 ft: 14093 corp: 10/17b lim: 5 exec/s: 0 rss: 71Mb L: 3/3 MS: 1 InsertByte- 00:06:35.562 [2024-05-14 11:43:02.416207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.562 [2024-05-14 11:43:02.416234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.562 [2024-05-14 11:43:02.416308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.562 [2024-05-14 11:43:02.416323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.562 [2024-05-14 11:43:02.416392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.562 [2024-05-14 11:43:02.416406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.562 #12 NEW cov: 12004 ft: 14139 corp: 11/20b lim: 5 exec/s: 0 rss: 71Mb L: 3/3 MS: 1 ShuffleBytes- 00:06:35.562 [2024-05-14 11:43:02.466191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.562 [2024-05-14 11:43:02.466217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.562 [2024-05-14 11:43:02.466271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.562 [2024-05-14 11:43:02.466285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.562 #13 NEW cov: 12004 ft: 14174 corp: 12/22b lim: 5 exec/s: 0 rss: 71Mb L: 2/3 MS: 1 EraseBytes- 00:06:35.562 [2024-05-14 11:43:02.506779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.562 [2024-05-14 11:43:02.506804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.562 [2024-05-14 11:43:02.506876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.562 [2024-05-14 11:43:02.506890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.562 [2024-05-14 11:43:02.506944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.562 [2024-05-14 11:43:02.506957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.562 [2024-05-14 11:43:02.507013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.562 [2024-05-14 11:43:02.507027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.562 [2024-05-14 11:43:02.507082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.562 [2024-05-14 11:43:02.507099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.562 #14 NEW cov: 12004 ft: 14491 corp: 13/27b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 CopyPart- 00:06:35.562 [2024-05-14 11:43:02.546470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.562 [2024-05-14 11:43:02.546495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.562 [2024-05-14 11:43:02.546567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.562 [2024-05-14 11:43:02.546582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.562 #15 NEW cov: 12004 ft: 14550 corp: 14/29b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 ChangeBinInt- 00:06:35.562 [2024-05-14 11:43:02.596596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.562 [2024-05-14 11:43:02.596621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.563 [2024-05-14 11:43:02.596677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.563 [2024-05-14 11:43:02.596690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.563 #16 NEW cov: 12004 ft: 14566 corp: 15/31b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:35.563 [2024-05-14 11:43:02.647132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.563 [2024-05-14 11:43:02.647156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.563 [2024-05-14 11:43:02.647227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.563 [2024-05-14 11:43:02.647241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.563 [2024-05-14 11:43:02.647298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.563 [2024-05-14 11:43:02.647311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.563 [2024-05-14 11:43:02.647366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.563 [2024-05-14 11:43:02.647383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.563 [2024-05-14 11:43:02.647440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.563 [2024-05-14 11:43:02.647453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:35.821 #17 NEW cov: 12004 ft: 14584 corp: 16/36b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:35.821 [2024-05-14 11:43:02.696817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.821 [2024-05-14 11:43:02.696842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.821 [2024-05-14 11:43:02.696902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.821 [2024-05-14 11:43:02.696915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.821 #18 NEW cov: 12004 ft: 14594 corp: 17/38b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 ChangeByte- 00:06:35.821 [2024-05-14 11:43:02.736770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.821 [2024-05-14 11:43:02.736794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.821 #19 NEW cov: 12004 ft: 14613 corp: 18/39b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ChangeBit- 00:06:35.821 [2024-05-14 11:43:02.777351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.821 [2024-05-14 11:43:02.777376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.821 [2024-05-14 11:43:02.777437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.821 [2024-05-14 11:43:02.777451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:06:35.821 [2024-05-14 11:43:02.777503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.821 [2024-05-14 11:43:02.777516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.821 [2024-05-14 11:43:02.777569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.821 [2024-05-14 11:43:02.777581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.821 #20 NEW cov: 12004 ft: 14720 corp: 19/43b lim: 5 exec/s: 0 rss: 72Mb L: 4/5 MS: 1 CopyPart- 00:06:35.821 [2024-05-14 11:43:02.827081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.821 [2024-05-14 11:43:02.827106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.821 #21 NEW cov: 12004 ft: 14737 corp: 20/44b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 CrossOver- 00:06:35.821 [2024-05-14 11:43:02.867165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:35.821 [2024-05-14 11:43:02.867189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.080 NEW_FUNC[1/1]: 0x19feca0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:36.080 #22 NEW cov: 12027 ft: 14778 corp: 21/45b lim: 5 exec/s: 22 rss: 73Mb L: 1/5 MS: 1 ShuffleBytes- 00:06:36.338 [2024-05-14 11:43:03.199108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.338 [2024-05-14 11:43:03.199155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.338 [2024-05-14 11:43:03.199285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.338 [2024-05-14 11:43:03.199305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.338 #23 NEW cov: 12027 ft: 15137 corp: 22/47b lim: 5 exec/s: 23 rss: 73Mb L: 2/5 MS: 1 ChangeByte- 00:06:36.338 [2024-05-14 11:43:03.239065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.338 [2024-05-14 11:43:03.239098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.338 [2024-05-14 11:43:03.239222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.338 [2024-05-14 11:43:03.239240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.338 #24 NEW cov: 12027 ft: 15151 corp: 23/49b lim: 5 exec/s: 24 rss: 73Mb L: 2/5 MS: 1 CopyPart- 00:06:36.338 [2024-05-14 11:43:03.279731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.338 [2024-05-14 11:43:03.279760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.338 [2024-05-14 11:43:03.279879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.338 [2024-05-14 11:43:03.279898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.339 [2024-05-14 11:43:03.280018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.339 [2024-05-14 11:43:03.280037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.339 [2024-05-14 11:43:03.280154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.339 [2024-05-14 11:43:03.280172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.339 #25 NEW cov: 12027 ft: 15177 corp: 24/53b lim: 5 exec/s: 25 rss: 73Mb L: 4/5 MS: 1 CopyPart- 00:06:36.339 [2024-05-14 11:43:03.329863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.339 [2024-05-14 11:43:03.329892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.339 [2024-05-14 11:43:03.330018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.339 [2024-05-14 11:43:03.330037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.339 [2024-05-14 11:43:03.330161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.339 [2024-05-14 11:43:03.330180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.339 [2024-05-14 11:43:03.330303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.339 [2024-05-14 11:43:03.330321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.339 #26 NEW cov: 12027 ft: 15196 corp: 25/57b lim: 5 exec/s: 26 rss: 73Mb L: 4/5 MS: 1 CopyPart- 00:06:36.339 [2024-05-14 11:43:03.369733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.339 [2024-05-14 11:43:03.369766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.339 [2024-05-14 11:43:03.369905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.339 [2024-05-14 11:43:03.369924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.339 [2024-05-14 11:43:03.370043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.339 [2024-05-14 11:43:03.370062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.339 #27 NEW cov: 12027 ft: 15217 corp: 26/60b lim: 5 exec/s: 27 rss: 73Mb L: 3/5 MS: 1 CopyPart- 00:06:36.339 [2024-05-14 11:43:03.410117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.339 [2024-05-14 11:43:03.410145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.339 [2024-05-14 11:43:03.410265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.339 [2024-05-14 11:43:03.410282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.339 [2024-05-14 11:43:03.410405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.339 [2024-05-14 11:43:03.410423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.339 [2024-05-14 11:43:03.410552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.339 [2024-05-14 11:43:03.410570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.597 #28 NEW cov: 12027 ft: 15280 corp: 27/64b lim: 5 exec/s: 28 rss: 73Mb L: 4/5 MS: 1 ShuffleBytes- 00:06:36.597 [2024-05-14 11:43:03.450254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.597 [2024-05-14 11:43:03.450283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.597 [2024-05-14 11:43:03.450417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.597 [2024-05-14 11:43:03.450434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.597 [2024-05-14 11:43:03.450554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.597 [2024-05-14 11:43:03.450572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.597 [2024-05-14 11:43:03.450712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.597 [2024-05-14 11:43:03.450730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.597 #29 NEW cov: 12027 ft: 15303 corp: 28/68b lim: 5 exec/s: 29 rss: 73Mb L: 4/5 MS: 1 CrossOver- 00:06:36.597 [2024-05-14 11:43:03.499898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.597 [2024-05-14 11:43:03.499930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.597 [2024-05-14 11:43:03.500049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.597 [2024-05-14 11:43:03.500068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.597 #30 NEW cov: 12027 ft: 15324 corp: 29/70b lim: 5 exec/s: 30 rss: 73Mb L: 2/5 MS: 1 CopyPart- 00:06:36.597 [2024-05-14 11:43:03.539753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.597 [2024-05-14 11:43:03.539782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.597 #31 NEW cov: 12027 ft: 15391 corp: 30/71b lim: 5 exec/s: 31 rss: 73Mb L: 1/5 MS: 1 ChangeByte- 00:06:36.597 [2024-05-14 11:43:03.580105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.597 [2024-05-14 11:43:03.580133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.597 [2024-05-14 11:43:03.580253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.597 [2024-05-14 11:43:03.580272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.597 #32 NEW cov: 12027 ft: 15394 corp: 31/73b lim: 5 exec/s: 32 rss: 73Mb L: 2/5 MS: 1 EraseBytes- 00:06:36.597 [2024-05-14 11:43:03.630539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.597 [2024-05-14 11:43:03.630568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.597 [2024-05-14 11:43:03.630698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:36.597 [2024-05-14 11:43:03.630716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.597 [2024-05-14 11:43:03.630836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.598 [2024-05-14 11:43:03.630852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.598 #33 NEW cov: 12027 ft: 15430 corp: 32/76b lim: 5 exec/s: 33 rss: 73Mb L: 3/5 MS: 1 CrossOver- 00:06:36.598 [2024-05-14 11:43:03.680442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.598 [2024-05-14 11:43:03.680471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.598 [2024-05-14 11:43:03.680595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.598 [2024-05-14 11:43:03.680613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.856 #34 NEW cov: 12027 ft: 15443 corp: 33/78b lim: 5 exec/s: 34 rss: 73Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:36.856 [2024-05-14 11:43:03.731066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.856 [2024-05-14 11:43:03.731098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.856 [2024-05-14 11:43:03.731219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.856 [2024-05-14 11:43:03.731236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.856 [2024-05-14 11:43:03.731355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.856 [2024-05-14 11:43:03.731386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.856 [2024-05-14 11:43:03.731499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.856 [2024-05-14 11:43:03.731518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.856 #35 NEW cov: 12027 ft: 15448 corp: 34/82b lim: 5 exec/s: 35 rss: 74Mb L: 4/5 MS: 1 ChangeByte- 00:06:36.856 [2024-05-14 11:43:03.781223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.856 [2024-05-14 11:43:03.781250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.856 
[2024-05-14 11:43:03.781365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.856 [2024-05-14 11:43:03.781385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.856 [2024-05-14 11:43:03.781503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.856 [2024-05-14 11:43:03.781519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.856 [2024-05-14 11:43:03.781633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.856 [2024-05-14 11:43:03.781649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:36.856 #36 NEW cov: 12027 ft: 15456 corp: 35/86b lim: 5 exec/s: 36 rss: 74Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:06:36.856 [2024-05-14 11:43:03.830609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.856 [2024-05-14 11:43:03.830639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.856 #37 NEW cov: 12027 ft: 15464 corp: 36/87b lim: 5 exec/s: 37 rss: 74Mb L: 1/5 MS: 1 CopyPart- 00:06:36.856 [2024-05-14 11:43:03.870910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.856 [2024-05-14 11:43:03.870937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.856 [2024-05-14 11:43:03.871056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.856 [2024-05-14 11:43:03.871074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.856 #38 NEW cov: 12027 ft: 15473 corp: 37/89b lim: 5 exec/s: 38 rss: 74Mb L: 2/5 MS: 1 CopyPart- 00:06:36.856 [2024-05-14 11:43:03.921345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.856 [2024-05-14 11:43:03.921373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.856 [2024-05-14 11:43:03.921493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.856 [2024-05-14 11:43:03.921511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.856 [2024-05-14 11:43:03.921633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:36.856 [2024-05-14 11:43:03.921650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:36.856 #39 NEW cov: 12027 ft: 15490 corp: 38/92b lim: 5 exec/s: 39 rss: 74Mb L: 3/5 MS: 1 CopyPart- 00:06:37.115 [2024-05-14 11:43:03.961444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.115 [2024-05-14 11:43:03.961473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.115 [2024-05-14 11:43:03.961596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.115 [2024-05-14 11:43:03.961614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.115 [2024-05-14 11:43:03.961744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.115 [2024-05-14 11:43:03.961760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.115 #40 NEW cov: 12027 ft: 15500 corp: 39/95b lim: 5 exec/s: 20 rss: 74Mb L: 3/5 MS: 1 InsertByte- 00:06:37.115 #40 DONE cov: 12027 ft: 15500 corp: 39/95b lim: 5 exec/s: 20 rss: 74Mb 00:06:37.115 Done 40 runs in 2 second(s) 00:06:37.115 [2024-05-14 11:43:03.982015] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:37.115 11:43:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:06:37.115 11:43:04 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:37.115 11:43:04 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:37.115 11:43:04 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:06:37.115 11:43:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:06:37.115 11:43:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:37.115 11:43:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:37.115 11:43:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:37.115 11:43:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:06:37.115 11:43:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:37.115 11:43:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:37.115 11:43:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:06:37.115 11:43:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4410 00:06:37.115 11:43:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:37.115 11:43:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:06:37.115 11:43:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed 
-e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:37.115 11:43:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:37.115 11:43:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:37.115 11:43:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:06:37.115 [2024-05-14 11:43:04.146991] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:06:37.115 [2024-05-14 11:43:04.147056] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3637388 ] 00:06:37.115 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.373 [2024-05-14 11:43:04.403770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.631 [2024-05-14 11:43:04.488828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.631 [2024-05-14 11:43:04.547618] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.631 [2024-05-14 11:43:04.563585] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:37.631 [2024-05-14 11:43:04.564007] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:06:37.631 INFO: Running with entropic power schedule (0xFF, 100). 00:06:37.632 INFO: Seed: 2355572871 00:06:37.632 INFO: Loaded 1 modules (351429 inline 8-bit counters): 351429 [0x290e30c, 0x2963fd1), 00:06:37.632 INFO: Loaded 1 PC tables (351429 PCs): 351429 [0x2963fd8,0x2ec0c28), 00:06:37.632 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:37.632 INFO: A corpus is not provided, starting from an empty corpus 00:06:37.632 #2 INITED exec/s: 0 rss: 63Mb 00:06:37.632 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:37.632 This may also happen if the target rejected all inputs we tried so far 00:06:37.632 [2024-05-14 11:43:04.640892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e70a0100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.632 [2024-05-14 11:43:04.640931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.892 NEW_FUNC[1/685]: 0x48eb90 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:06:37.893 NEW_FUNC[2/685]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:37.893 #12 NEW cov: 11806 ft: 11807 corp: 2/11b lim: 40 exec/s: 0 rss: 70Mb L: 10/10 MS: 5 ChangeByte-ChangeBit-ChangeBinInt-CrossOver-CMP- DE: "\001\000\000\000\000\000\000p"- 00:06:37.893 [2024-05-14 11:43:04.981256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e70a0100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:37.893 [2024-05-14 11:43:04.981300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.153 #13 NEW cov: 11936 ft: 12592 corp: 3/21b lim: 40 exec/s: 0 rss: 70Mb L: 10/10 MS: 1 ChangeByte- 00:06:38.153 [2024-05-14 11:43:05.041277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000070 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.153 [2024-05-14 11:43:05.041308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.153 #16 NEW cov: 11942 ft: 12817 corp: 4/35b lim: 40 exec/s: 0 rss: 70Mb L: 14/14 MS: 3 CrossOver-ChangeByte-PersAutoDict- DE: "\001\000\000\000\000\000\000p"- 00:06:38.153 [2024-05-14 11:43:05.091451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e70a0180 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.153 [2024-05-14 11:43:05.091481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.153 #17 NEW cov: 12027 ft: 13046 corp: 5/45b lim: 40 exec/s: 0 rss: 70Mb L: 10/14 MS: 1 ChangeBit- 00:06:38.153 [2024-05-14 11:43:05.141559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e70a0100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.153 [2024-05-14 11:43:05.141586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.153 #18 NEW cov: 12027 ft: 13135 corp: 6/60b lim: 40 exec/s: 0 rss: 70Mb L: 15/15 MS: 1 CopyPart- 00:06:38.153 [2024-05-14 11:43:05.191675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e70a0180 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.153 [2024-05-14 11:43:05.191705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.153 #19 NEW cov: 12027 ft: 13234 corp: 7/73b lim: 40 exec/s: 0 rss: 70Mb L: 13/15 MS: 1 CrossOver- 00:06:38.153 [2024-05-14 11:43:05.241957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e70a0100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.153 [2024-05-14 11:43:05.241985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.412 #20 NEW cov: 12027 ft: 13288 corp: 8/83b lim: 40 exec/s: 0 rss: 70Mb L: 10/15 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000p"- 00:06:38.412 [2024-05-14 11:43:05.292011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000070 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.412 [2024-05-14 11:43:05.292039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.412 #21 NEW cov: 12027 ft: 13305 corp: 9/93b lim: 40 exec/s: 0 rss: 71Mb L: 10/15 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000p"- 00:06:38.412 [2024-05-14 11:43:05.342436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.412 [2024-05-14 11:43:05.342463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.412 [2024-05-14 11:43:05.342612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000070 cdw11:a1000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.412 [2024-05-14 11:43:05.342632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.412 #22 NEW cov: 12027 ft: 13650 corp: 10/111b lim: 40 exec/s: 0 rss: 71Mb L: 18/18 MS: 1 CopyPart- 00:06:38.413 [2024-05-14 11:43:05.402506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e70a0180 cdw11:000000e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.413 [2024-05-14 11:43:05.402535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.413 [2024-05-14 11:43:05.402684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0a010000 cdw11:0000e700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.413 [2024-05-14 11:43:05.402703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.413 #23 NEW cov: 12027 ft: 13692 corp: 11/130b lim: 40 exec/s: 0 rss: 71Mb L: 19/19 MS: 1 CrossOver- 00:06:38.413 [2024-05-14 11:43:05.462817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e70a0100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.413 [2024-05-14 11:43:05.462851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.413 [2024-05-14 11:43:05.463018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:005dfdfd cdw11:fdfdfdfd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.413 [2024-05-14 11:43:05.463036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.413 NEW_FUNC[1/1]: 0x19feca0 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:38.413 #24 NEW cov: 12050 ft: 13701 corp: 12/152b lim: 40 exec/s: 0 rss: 71Mb L: 22/22 MS: 1 InsertRepeatedBytes- 00:06:38.672 [2024-05-14 11:43:05.523270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e70a0100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.672 [2024-05-14 11:43:05.523300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.672 [2024-05-14 11:43:05.523462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:005d0180 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.672 [2024-05-14 11:43:05.523483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.672 [2024-05-14 11:43:05.523643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:e700700a cdw11:e70a0100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.672 [2024-05-14 11:43:05.523661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.672 #25 NEW cov: 12050 ft: 13965 corp: 13/177b lim: 40 exec/s: 0 rss: 71Mb L: 25/25 MS: 1 CrossOver- 00:06:38.672 [2024-05-14 11:43:05.573181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e70a0100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.672 [2024-05-14 11:43:05.573210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.672 [2024-05-14 11:43:05.573362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:005dfdfd cdw11:fdfdfdfd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.672 [2024-05-14 11:43:05.573386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.672 #26 NEW cov: 12050 ft: 13990 corp: 14/199b lim: 40 exec/s: 0 rss: 71Mb L: 22/25 MS: 1 ShuffleBytes- 00:06:38.672 [2024-05-14 11:43:05.633286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.672 [2024-05-14 11:43:05.633315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.672 [2024-05-14 11:43:05.633477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00007000 cdw11:00000070 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.672 [2024-05-14 11:43:05.633494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.672 #27 NEW cov: 12050 ft: 13997 corp: 15/217b lim: 40 exec/s: 27 rss: 71Mb L: 18/25 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000p"- 00:06:38.672 [2024-05-14 11:43:05.693285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e70a0100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.672 [2024-05-14 11:43:05.693315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:38.672 #28 NEW cov: 12050 ft: 14014 corp: 16/227b lim: 40 exec/s: 28 rss: 71Mb L: 10/25 MS: 1 ChangeBinInt- 00:06:38.672 [2024-05-14 11:43:05.743720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01ff0000 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.672 [2024-05-14 11:43:05.743746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.672 [2024-05-14 11:43:05.743887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.672 [2024-05-14 11:43:05.743905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.982 #29 NEW cov: 12050 ft: 14045 corp: 17/246b lim: 40 exec/s: 29 rss: 72Mb L: 19/25 MS: 1 InsertByte- 00:06:38.982 [2024-05-14 11:43:05.803663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e70a0180 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.982 [2024-05-14 11:43:05.803693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.982 #30 NEW cov: 12050 ft: 14106 corp: 18/259b lim: 40 exec/s: 30 rss: 72Mb L: 13/25 MS: 1 InsertRepeatedBytes- 00:06:38.982 [2024-05-14 11:43:05.854048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e70a0100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.982 [2024-05-14 11:43:05.854077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.982 [2024-05-14 11:43:05.854229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:005dfdff cdw11:843d544e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.982 [2024-05-14 11:43:05.854249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.982 #31 NEW cov: 12050 ft: 14175 corp: 19/281b lim: 40 exec/s: 31 rss: 72Mb L: 22/25 MS: 1 CMP- DE: "\377\204=TN\260\337$"- 00:06:38.982 [2024-05-14 11:43:05.904258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2c2c2c2c cdw11:2c010000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.982 [2024-05-14 11:43:05.904287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.982 [2024-05-14 11:43:05.904437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0a320000 cdw11:00007000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.982 [2024-05-14 11:43:05.904455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.982 #34 NEW cov: 12050 ft: 14178 corp: 20/298b lim: 40 exec/s: 34 rss: 72Mb L: 17/25 MS: 3 InsertRepeatedBytes-InsertByte-CrossOver- 00:06:38.982 [2024-05-14 11:43:05.954434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e70a0180 cdw11:000000e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.982 [2024-05-14 11:43:05.954464] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.982 [2024-05-14 11:43:05.954617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:0a010000 cdw11:0000e700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.982 [2024-05-14 11:43:05.954637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.982 #35 NEW cov: 12050 ft: 14243 corp: 21/318b lim: 40 exec/s: 35 rss: 72Mb L: 20/25 MS: 1 InsertByte- 00:06:38.982 [2024-05-14 11:43:06.014881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.982 [2024-05-14 11:43:06.014910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.982 [2024-05-14 11:43:06.015056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000070 cdw11:a1000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.982 [2024-05-14 11:43:06.015076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.982 [2024-05-14 11:43:06.015227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:000a0a01 cdw11:80000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.982 [2024-05-14 11:43:06.015243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.982 #36 NEW cov: 12050 ft: 14273 corp: 22/346b lim: 40 exec/s: 36 rss: 72Mb L: 28/28 MS: 1 CrossOver- 00:06:39.254 [2024-05-14 11:43:06.075173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e70a0100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.254 [2024-05-14 11:43:06.075202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.254 [2024-05-14 11:43:06.075367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:005d0180 cdw11:00020000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.254 [2024-05-14 11:43:06.075390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.254 [2024-05-14 11:43:06.075537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:e700700a cdw11:e70a0100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.254 [2024-05-14 11:43:06.075555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.254 #37 NEW cov: 12050 ft: 14292 corp: 23/371b lim: 40 exec/s: 37 rss: 72Mb L: 25/28 MS: 1 ChangeBinInt- 00:06:39.254 [2024-05-14 11:43:06.134708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01700000 cdw11:00007000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.254 [2024-05-14 11:43:06.134736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.254 #38 NEW cov: 12050 ft: 14294 corp: 24/380b lim: 40 exec/s: 38 rss: 72Mb L: 9/28 MS: 1 
EraseBytes- 00:06:39.254 [2024-05-14 11:43:06.185468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01ff0000 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.254 [2024-05-14 11:43:06.185496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.254 [2024-05-14 11:43:06.185658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.254 [2024-05-14 11:43:06.185675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.254 [2024-05-14 11:43:06.185825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:70000000 cdw11:70000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.254 [2024-05-14 11:43:06.185845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.254 #39 NEW cov: 12050 ft: 14307 corp: 25/410b lim: 40 exec/s: 39 rss: 72Mb L: 30/30 MS: 1 CopyPart- 00:06:39.254 [2024-05-14 11:43:06.245063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e70ae70a cdw11:01000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.254 [2024-05-14 11:43:06.245093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.254 #42 NEW cov: 12050 ft: 14316 corp: 26/419b lim: 40 exec/s: 42 rss: 72Mb L: 9/30 MS: 3 EraseBytes-CrossOver-CopyPart- 00:06:39.254 [2024-05-14 11:43:06.305566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00002900 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.254 [2024-05-14 11:43:06.305597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.254 [2024-05-14 11:43:06.305751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000070 cdw11:a1000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.254 [2024-05-14 11:43:06.305769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.254 #43 NEW cov: 12050 ft: 14342 corp: 27/437b lim: 40 exec/s: 43 rss: 72Mb L: 18/30 MS: 1 ChangeByte- 00:06:39.513 [2024-05-14 11:43:06.355493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e74a0100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.513 [2024-05-14 11:43:06.355524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.513 #44 NEW cov: 12050 ft: 14427 corp: 28/452b lim: 40 exec/s: 44 rss: 72Mb L: 15/30 MS: 1 ChangeBit- 00:06:39.514 [2024-05-14 11:43:06.406084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e70a0180 cdw11:00000a01 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.514 [2024-05-14 11:43:06.406112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.514 [2024-05-14 11:43:06.406266] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:80000000 cdw11:00e70070 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.514 [2024-05-14 11:43:06.406285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.514 [2024-05-14 11:43:06.406434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0a010000 cdw11:e700700a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.514 [2024-05-14 11:43:06.406452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.514 #45 NEW cov: 12050 ft: 14435 corp: 29/477b lim: 40 exec/s: 45 rss: 72Mb L: 25/30 MS: 1 CopyPart- 00:06:39.514 [2024-05-14 11:43:06.455712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:01000070 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.514 [2024-05-14 11:43:06.455740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.514 #46 NEW cov: 12050 ft: 14497 corp: 30/491b lim: 40 exec/s: 46 rss: 72Mb L: 14/30 MS: 1 ShuffleBytes- 00:06:39.514 [2024-05-14 11:43:06.505898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:01000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.514 [2024-05-14 11:43:06.505929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.514 #47 NEW cov: 12050 ft: 14587 corp: 31/505b lim: 40 exec/s: 47 rss: 72Mb L: 14/30 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000p"- 00:06:39.514 [2024-05-14 11:43:06.566286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:0000002e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.514 [2024-05-14 11:43:06.566317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.514 [2024-05-14 11:43:06.566477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e70a1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.514 [2024-05-14 11:43:06.566497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.514 #48 NEW cov: 12050 ft: 14597 corp: 32/526b lim: 40 exec/s: 48 rss: 72Mb L: 21/30 MS: 1 InsertRepeatedBytes- 00:06:39.773 [2024-05-14 11:43:06.616306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:e70a0300 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.773 [2024-05-14 11:43:06.616337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.773 #49 NEW cov: 12050 ft: 14609 corp: 33/536b lim: 40 exec/s: 24 rss: 73Mb L: 10/30 MS: 1 ChangeBit- 00:06:39.774 #49 DONE cov: 12050 ft: 14609 corp: 33/536b lim: 40 exec/s: 24 rss: 73Mb 00:06:39.774 ###### Recommended dictionary. ###### 00:06:39.774 "\001\000\000\000\000\000\000p" # Uses: 5 00:06:39.774 "\377\204=TN\260\337$" # Uses: 0 00:06:39.774 ###### End of recommended dictionary. 
###### 00:06:39.774 Done 49 runs in 2 second(s) 00:06:39.774 [2024-05-14 11:43:06.636676] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:39.774 11:43:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:06:39.774 11:43:06 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:39.774 11:43:06 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:39.774 11:43:06 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:06:39.774 11:43:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:06:39.774 11:43:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:39.774 11:43:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:39.774 11:43:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:39.774 11:43:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:06:39.774 11:43:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:39.774 11:43:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:39.774 11:43:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:06:39.774 11:43:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4411 00:06:39.774 11:43:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:39.774 11:43:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:06:39.774 11:43:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:39.774 11:43:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:39.774 11:43:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:39.774 11:43:06 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:06:39.774 [2024-05-14 11:43:06.805802] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:06:39.774 [2024-05-14 11:43:06.805868] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3637931 ] 00:06:39.774 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.032 [2024-05-14 11:43:07.060054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.291 [2024-05-14 11:43:07.152352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.291 [2024-05-14 11:43:07.211161] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:40.291 [2024-05-14 11:43:07.227113] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:40.291 [2024-05-14 11:43:07.227499] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:06:40.291 INFO: Running with entropic power schedule (0xFF, 100). 00:06:40.291 INFO: Seed: 722600990 00:06:40.291 INFO: Loaded 1 modules (351429 inline 8-bit counters): 351429 [0x290e30c, 0x2963fd1), 00:06:40.291 INFO: Loaded 1 PC tables (351429 PCs): 351429 [0x2963fd8,0x2ec0c28), 00:06:40.291 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:40.291 INFO: A corpus is not provided, starting from an empty corpus 00:06:40.291 #2 INITED exec/s: 0 rss: 63Mb 00:06:40.291 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:40.291 This may also happen if the target rejected all inputs we tried so far 00:06:40.291 [2024-05-14 11:43:07.293398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.291 [2024-05-14 11:43:07.293426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.291 [2024-05-14 11:43:07.293486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.291 [2024-05-14 11:43:07.293499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.291 [2024-05-14 11:43:07.293558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.291 [2024-05-14 11:43:07.293571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.291 [2024-05-14 11:43:07.293629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.291 [2024-05-14 11:43:07.293642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.291 [2024-05-14 11:43:07.293714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.291 [2024-05-14 11:43:07.293727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 
cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.551 NEW_FUNC[1/686]: 0x490900 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:06:40.551 NEW_FUNC[2/686]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:40.551 #22 NEW cov: 11818 ft: 11813 corp: 2/41b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 5 ChangeByte-CrossOver-CMP-ShuffleBytes-InsertRepeatedBytes- DE: "\001\000"- 00:06:40.551 [2024-05-14 11:43:07.614563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.551 [2024-05-14 11:43:07.614623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.551 [2024-05-14 11:43:07.614714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.551 [2024-05-14 11:43:07.614740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.551 [2024-05-14 11:43:07.614826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.551 [2024-05-14 11:43:07.614852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.551 [2024-05-14 11:43:07.614936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.551 [2024-05-14 11:43:07.614972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.551 [2024-05-14 11:43:07.615059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.551 [2024-05-14 11:43:07.615084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.810 #28 NEW cov: 11948 ft: 12439 corp: 3/81b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ShuffleBytes- 00:06:40.810 [2024-05-14 11:43:07.664336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.810 [2024-05-14 11:43:07.664366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.810 [2024-05-14 11:43:07.664448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.810 [2024-05-14 11:43:07.664463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.810 [2024-05-14 11:43:07.664522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.810 [2024-05-14 11:43:07.664535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.810 [2024-05-14 11:43:07.664593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.810 [2024-05-14 11:43:07.664606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.810 [2024-05-14 11:43:07.664665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffff01 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.810 [2024-05-14 11:43:07.664678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.810 #34 NEW cov: 11954 ft: 12865 corp: 4/121b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 CrossOver- 00:06:40.810 [2024-05-14 11:43:07.704366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.810 [2024-05-14 11:43:07.704396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.810 [2024-05-14 11:43:07.704474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.810 [2024-05-14 11:43:07.704488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.810 [2024-05-14 11:43:07.704547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.810 [2024-05-14 11:43:07.704561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.810 [2024-05-14 11:43:07.704620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ff010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.810 [2024-05-14 11:43:07.704633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.810 [2024-05-14 11:43:07.704693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:07ffffff cdw11:ffff0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.810 [2024-05-14 11:43:07.704706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.810 #35 NEW cov: 12039 ft: 13135 corp: 5/161b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ChangeBinInt- 00:06:40.810 [2024-05-14 11:43:07.754497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.810 [2024-05-14 11:43:07.754524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.810 [2024-05-14 11:43:07.754603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.810 [2024-05-14 11:43:07.754618] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.810 [2024-05-14 11:43:07.754686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.810 [2024-05-14 11:43:07.754700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.810 [2024-05-14 11:43:07.754758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.810 [2024-05-14 11:43:07.754771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.811 [2024-05-14 11:43:07.754830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffff01 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.811 [2024-05-14 11:43:07.754844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.811 #36 NEW cov: 12039 ft: 13199 corp: 6/201b lim: 40 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 ShuffleBytes- 00:06:40.811 [2024-05-14 11:43:07.804048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:87878787 cdw11:87878787 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.811 [2024-05-14 11:43:07.804074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.811 #40 NEW cov: 12039 ft: 14240 corp: 7/216b lim: 40 exec/s: 0 rss: 71Mb L: 15/40 MS: 4 CopyPart-PersAutoDict-EraseBytes-InsertRepeatedBytes- DE: "\001\000"- 00:06:40.811 [2024-05-14 11:43:07.844791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.811 [2024-05-14 11:43:07.844817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.811 [2024-05-14 11:43:07.844877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffefff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.811 [2024-05-14 11:43:07.844891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.811 [2024-05-14 11:43:07.844949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.811 [2024-05-14 11:43:07.844962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.811 [2024-05-14 11:43:07.845022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.811 [2024-05-14 11:43:07.845035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.811 [2024-05-14 11:43:07.845092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.811 
[2024-05-14 11:43:07.845105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.811 #41 NEW cov: 12039 ft: 14345 corp: 8/256b lim: 40 exec/s: 0 rss: 71Mb L: 40/40 MS: 1 ChangeBit- 00:06:40.811 [2024-05-14 11:43:07.884415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:64646464 cdw11:64646464 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.811 [2024-05-14 11:43:07.884440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.811 [2024-05-14 11:43:07.884500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:64646464 cdw11:64646464 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:40.811 [2024-05-14 11:43:07.884514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.070 #45 NEW cov: 12039 ft: 14629 corp: 9/279b lim: 40 exec/s: 0 rss: 71Mb L: 23/40 MS: 4 CopyPart-ChangeByte-ChangeByte-InsertRepeatedBytes- 00:06:41.070 [2024-05-14 11:43:07.924675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.070 [2024-05-14 11:43:07.924701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.070 [2024-05-14 11:43:07.924779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.070 [2024-05-14 11:43:07.924793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.070 [2024-05-14 11:43:07.924854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.070 [2024-05-14 11:43:07.924867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.070 #46 NEW cov: 12039 ft: 14849 corp: 10/305b lim: 40 exec/s: 0 rss: 71Mb L: 26/40 MS: 1 EraseBytes- 00:06:41.070 [2024-05-14 11:43:07.965114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.070 [2024-05-14 11:43:07.965139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.070 [2024-05-14 11:43:07.965197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffbf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.070 [2024-05-14 11:43:07.965211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.070 [2024-05-14 11:43:07.965270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.070 [2024-05-14 11:43:07.965283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.070 [2024-05-14 11:43:07.965342] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.070 [2024-05-14 11:43:07.965355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.070 [2024-05-14 11:43:07.965414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.070 [2024-05-14 11:43:07.965427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:41.070 #47 NEW cov: 12039 ft: 14869 corp: 11/345b lim: 40 exec/s: 0 rss: 71Mb L: 40/40 MS: 1 ChangeBit- 00:06:41.070 [2024-05-14 11:43:08.005068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:64646464 cdw11:64646464 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.070 [2024-05-14 11:43:08.005096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.070 [2024-05-14 11:43:08.005158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:64646464 cdw11:64646464 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.070 [2024-05-14 11:43:08.005172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.070 [2024-05-14 11:43:08.005232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:64646464 cdw11:64646464 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.070 [2024-05-14 11:43:08.005246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.070 [2024-05-14 11:43:08.005307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:64646464 cdw11:64646464 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.070 [2024-05-14 11:43:08.005321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.070 #48 NEW cov: 12039 ft: 14913 corp: 12/380b lim: 40 exec/s: 0 rss: 71Mb L: 35/40 MS: 1 CrossOver- 00:06:41.070 [2024-05-14 11:43:08.055196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:64646464 cdw11:64646464 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.070 [2024-05-14 11:43:08.055221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.070 [2024-05-14 11:43:08.055280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:64646464 cdw11:64646464 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.070 [2024-05-14 11:43:08.055294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.070 [2024-05-14 11:43:08.055351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:64646464 cdw11:64646464 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.070 [2024-05-14 11:43:08.055364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.070 [2024-05-14 11:43:08.055428] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:64646464 cdw11:64646464 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.070 [2024-05-14 11:43:08.055442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.070 #49 NEW cov: 12039 ft: 14933 corp: 13/418b lim: 40 exec/s: 0 rss: 71Mb L: 38/40 MS: 1 CopyPart- 00:06:41.070 [2024-05-14 11:43:08.095501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffbf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.070 [2024-05-14 11:43:08.095526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.070 [2024-05-14 11:43:08.095602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffbf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.070 [2024-05-14 11:43:08.095617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.070 [2024-05-14 11:43:08.095688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.070 [2024-05-14 11:43:08.095701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.070 [2024-05-14 11:43:08.095760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.070 [2024-05-14 11:43:08.095776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.070 [2024-05-14 11:43:08.095833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.070 [2024-05-14 11:43:08.095846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:41.070 #50 NEW cov: 12039 ft: 14959 corp: 14/458b lim: 40 exec/s: 0 rss: 71Mb L: 40/40 MS: 1 CopyPart- 00:06:41.070 [2024-05-14 11:43:08.145591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.070 [2024-05-14 11:43:08.145616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.070 [2024-05-14 11:43:08.145675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffbf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.070 [2024-05-14 11:43:08.145689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.070 [2024-05-14 11:43:08.145762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.070 [2024-05-14 11:43:08.145776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.070 [2024-05-14 
11:43:08.145834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.070 [2024-05-14 11:43:08.145847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.070 [2024-05-14 11:43:08.145907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:ffff0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.070 [2024-05-14 11:43:08.145920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:41.330 NEW_FUNC[1/1]: 0x19feca0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:41.330 #51 NEW cov: 12062 ft: 14986 corp: 15/498b lim: 40 exec/s: 0 rss: 71Mb L: 40/40 MS: 1 ChangeBinInt- 00:06:41.330 [2024-05-14 11:43:08.185733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.330 [2024-05-14 11:43:08.185757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.330 [2024-05-14 11:43:08.185833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:0100ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.330 [2024-05-14 11:43:08.185847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.330 [2024-05-14 11:43:08.185907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.330 [2024-05-14 11:43:08.185920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.330 [2024-05-14 11:43:08.185977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.330 [2024-05-14 11:43:08.185991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.330 [2024-05-14 11:43:08.186048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.330 [2024-05-14 11:43:08.186065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:41.330 #52 NEW cov: 12062 ft: 14997 corp: 16/538b lim: 40 exec/s: 0 rss: 71Mb L: 40/40 MS: 1 PersAutoDict- DE: "\001\000"- 00:06:41.330 [2024-05-14 11:43:08.225737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:64646464 cdw11:64646464 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.330 [2024-05-14 11:43:08.225762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.330 [2024-05-14 11:43:08.225818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:64646464 cdw11:64646464 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.330 [2024-05-14 
11:43:08.225832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.330 [2024-05-14 11:43:08.225888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:64646464 cdw11:64646464 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.330 [2024-05-14 11:43:08.225902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.330 [2024-05-14 11:43:08.225963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:64646464 cdw11:64446464 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.330 [2024-05-14 11:43:08.225977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.330 #53 NEW cov: 12062 ft: 15030 corp: 17/573b lim: 40 exec/s: 0 rss: 71Mb L: 35/40 MS: 1 ChangeBit- 00:06:41.330 [2024-05-14 11:43:08.265994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.330 [2024-05-14 11:43:08.266019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.330 [2024-05-14 11:43:08.266095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffbf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.330 [2024-05-14 11:43:08.266109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.330 [2024-05-14 11:43:08.266170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.330 [2024-05-14 11:43:08.266183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.330 [2024-05-14 11:43:08.266241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.330 [2024-05-14 11:43:08.266254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.330 [2024-05-14 11:43:08.266314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:ffff0100 cdw11:ffff0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.330 [2024-05-14 11:43:08.266327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:41.330 #54 NEW cov: 12062 ft: 15077 corp: 18/613b lim: 40 exec/s: 54 rss: 71Mb L: 40/40 MS: 1 PersAutoDict- DE: "\001\000"- 00:06:41.330 [2024-05-14 11:43:08.305616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff0affff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.330 [2024-05-14 11:43:08.305640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.330 [2024-05-14 11:43:08.305718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:41.330 [2024-05-14 11:43:08.305736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.330 #56 NEW cov: 12062 ft: 15096 corp: 19/634b lim: 40 exec/s: 56 rss: 71Mb L: 21/40 MS: 2 CrossOver-CrossOver- 00:06:41.330 [2024-05-14 11:43:08.345599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:87878787 cdw11:87878787 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.330 [2024-05-14 11:43:08.345624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.330 #57 NEW cov: 12062 ft: 15109 corp: 20/646b lim: 40 exec/s: 57 rss: 71Mb L: 12/40 MS: 1 EraseBytes- 00:06:41.330 [2024-05-14 11:43:08.396337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.330 [2024-05-14 11:43:08.396362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.330 [2024-05-14 11:43:08.396422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.330 [2024-05-14 11:43:08.396436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.330 [2024-05-14 11:43:08.396511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.330 [2024-05-14 11:43:08.396525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.330 [2024-05-14 11:43:08.396581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ff010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.330 [2024-05-14 11:43:08.396594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.330 [2024-05-14 11:43:08.396650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:0007ffff cdw11:ffff0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.330 [2024-05-14 11:43:08.396664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:41.590 #63 NEW cov: 12062 ft: 15121 corp: 21/686b lim: 40 exec/s: 63 rss: 71Mb L: 40/40 MS: 1 CopyPart- 00:06:41.590 [2024-05-14 11:43:08.446452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff5a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.590 [2024-05-14 11:43:08.446477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.590 [2024-05-14 11:43:08.446536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffbf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.590 [2024-05-14 11:43:08.446550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.590 [2024-05-14 11:43:08.446608] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.590 [2024-05-14 11:43:08.446621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.590 [2024-05-14 11:43:08.446679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.590 [2024-05-14 11:43:08.446692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.591 [2024-05-14 11:43:08.446748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.591 [2024-05-14 11:43:08.446764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:41.591 #64 NEW cov: 12062 ft: 15128 corp: 22/726b lim: 40 exec/s: 64 rss: 72Mb L: 40/40 MS: 1 ChangeByte- 00:06:41.591 [2024-05-14 11:43:08.496122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:64646464 cdw11:64646464 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.591 [2024-05-14 11:43:08.496147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.591 [2024-05-14 11:43:08.496204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:6464646d cdw11:64646464 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.591 [2024-05-14 11:43:08.496218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.591 #65 NEW cov: 12062 ft: 15140 corp: 23/749b lim: 40 exec/s: 65 rss: 72Mb L: 23/40 MS: 1 ChangeBinInt- 00:06:41.591 [2024-05-14 11:43:08.536797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.591 [2024-05-14 11:43:08.536822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.591 [2024-05-14 11:43:08.536882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.591 [2024-05-14 11:43:08.536896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.591 [2024-05-14 11:43:08.536956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.591 [2024-05-14 11:43:08.536969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.591 [2024-05-14 11:43:08.537027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.591 [2024-05-14 11:43:08.537040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.591 [2024-05-14 11:43:08.537097] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.591 [2024-05-14 11:43:08.537110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:41.591 #66 NEW cov: 12062 ft: 15149 corp: 24/789b lim: 40 exec/s: 66 rss: 72Mb L: 40/40 MS: 1 ShuffleBytes- 00:06:41.591 [2024-05-14 11:43:08.576214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:87878787 cdw11:87878787 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.591 [2024-05-14 11:43:08.576239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.591 #67 NEW cov: 12062 ft: 15150 corp: 25/804b lim: 40 exec/s: 67 rss: 72Mb L: 15/40 MS: 1 ShuffleBytes- 00:06:41.591 [2024-05-14 11:43:08.616505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:64646464 cdw11:64646464 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.591 [2024-05-14 11:43:08.616530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.591 [2024-05-14 11:43:08.616591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:64646464 cdw11:64646464 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.591 [2024-05-14 11:43:08.616605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.591 #68 NEW cov: 12062 ft: 15162 corp: 26/825b lim: 40 exec/s: 68 rss: 72Mb L: 21/40 MS: 1 EraseBytes- 00:06:41.591 [2024-05-14 11:43:08.657096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.591 [2024-05-14 11:43:08.657122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.591 [2024-05-14 11:43:08.657180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.591 [2024-05-14 11:43:08.657193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.591 [2024-05-14 11:43:08.657266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.591 [2024-05-14 11:43:08.657279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.591 [2024-05-14 11:43:08.657338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.591 [2024-05-14 11:43:08.657351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.591 [2024-05-14 11:43:08.657412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.591 [2024-05-14 11:43:08.657425] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:41.591 #71 NEW cov: 12062 ft: 15189 corp: 27/865b lim: 40 exec/s: 71 rss: 72Mb L: 40/40 MS: 3 CrossOver-ChangeByte-InsertRepeatedBytes- 00:06:41.850 [2024-05-14 11:43:08.697253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.850 [2024-05-14 11:43:08.697278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.850 [2024-05-14 11:43:08.697351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.850 [2024-05-14 11:43:08.697365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.851 [2024-05-14 11:43:08.697428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.851 [2024-05-14 11:43:08.697443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.851 [2024-05-14 11:43:08.697500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ff41ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.851 [2024-05-14 11:43:08.697513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.851 [2024-05-14 11:43:08.697569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffff01 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.851 [2024-05-14 11:43:08.697582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:41.851 #72 NEW cov: 12062 ft: 15200 corp: 28/905b lim: 40 exec/s: 72 rss: 72Mb L: 40/40 MS: 1 ChangeByte- 00:06:41.851 [2024-05-14 11:43:08.737361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.851 [2024-05-14 11:43:08.737390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.851 [2024-05-14 11:43:08.737465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.851 [2024-05-14 11:43:08.737479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.851 [2024-05-14 11:43:08.737536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.851 [2024-05-14 11:43:08.737549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.851 [2024-05-14 11:43:08.737608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.851 
[2024-05-14 11:43:08.737621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.851 [2024-05-14 11:43:08.737678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.851 [2024-05-14 11:43:08.737691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:41.851 #73 NEW cov: 12062 ft: 15215 corp: 29/945b lim: 40 exec/s: 73 rss: 72Mb L: 40/40 MS: 1 ShuffleBytes- 00:06:41.851 [2024-05-14 11:43:08.777446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.851 [2024-05-14 11:43:08.777470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.851 [2024-05-14 11:43:08.777528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.851 [2024-05-14 11:43:08.777541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.851 [2024-05-14 11:43:08.777599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.851 [2024-05-14 11:43:08.777612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.851 [2024-05-14 11:43:08.777669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.851 [2024-05-14 11:43:08.777682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.851 [2024-05-14 11:43:08.777739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:28ffffff cdw11:ffff0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.851 [2024-05-14 11:43:08.777752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:41.851 #74 NEW cov: 12062 ft: 15229 corp: 30/985b lim: 40 exec/s: 74 rss: 72Mb L: 40/40 MS: 1 ChangeByte- 00:06:41.851 [2024-05-14 11:43:08.817646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.851 [2024-05-14 11:43:08.817671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.851 [2024-05-14 11:43:08.817729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.851 [2024-05-14 11:43:08.817742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.851 [2024-05-14 11:43:08.817799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:41.851 [2024-05-14 11:43:08.817816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.851 [2024-05-14 11:43:08.817870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.851 [2024-05-14 11:43:08.817884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.851 [2024-05-14 11:43:08.817941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.851 [2024-05-14 11:43:08.817955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:41.851 #75 NEW cov: 12062 ft: 15235 corp: 31/1025b lim: 40 exec/s: 75 rss: 72Mb L: 40/40 MS: 1 CopyPart- 00:06:41.851 [2024-05-14 11:43:08.857263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:64646464 cdw11:64646464 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.851 [2024-05-14 11:43:08.857288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.851 [2024-05-14 11:43:08.857362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:64646464 cdw11:64646464 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.851 [2024-05-14 11:43:08.857376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.851 #76 NEW cov: 12062 ft: 15251 corp: 32/1046b lim: 40 exec/s: 76 rss: 72Mb L: 21/40 MS: 1 ShuffleBytes- 00:06:41.851 [2024-05-14 11:43:08.907901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.851 [2024-05-14 11:43:08.907926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.851 [2024-05-14 11:43:08.907985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.851 [2024-05-14 11:43:08.907998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.851 [2024-05-14 11:43:08.908070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffbbff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.851 [2024-05-14 11:43:08.908084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.851 [2024-05-14 11:43:08.908141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ff010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:41.851 [2024-05-14 11:43:08.908155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:41.851 [2024-05-14 11:43:08.908211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:0007ffff cdw11:ffff0100 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:41.851 [2024-05-14 11:43:08.908225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:41.851 #77 NEW cov: 12062 ft: 15271 corp: 33/1086b lim: 40 exec/s: 77 rss: 72Mb L: 40/40 MS: 1 ChangeByte- 00:06:42.110 [2024-05-14 11:43:08.958068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.110 [2024-05-14 11:43:08.958095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.110 [2024-05-14 11:43:08.958149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffbf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.110 [2024-05-14 11:43:08.958165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.110 [2024-05-14 11:43:08.958223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.110 [2024-05-14 11:43:08.958236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.110 [2024-05-14 11:43:08.958294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.110 [2024-05-14 11:43:08.958307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.110 [2024-05-14 11:43:08.958365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:ffff0100 cdw11:ffff0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.110 [2024-05-14 11:43:08.958383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:42.110 #78 NEW cov: 12062 ft: 15317 corp: 34/1126b lim: 40 exec/s: 78 rss: 72Mb L: 40/40 MS: 1 PersAutoDict- DE: "\001\000"- 00:06:42.110 [2024-05-14 11:43:09.008192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.110 [2024-05-14 11:43:09.008218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.110 [2024-05-14 11:43:09.008277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffbf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.110 [2024-05-14 11:43:09.008290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.110 [2024-05-14 11:43:09.008345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.110 [2024-05-14 11:43:09.008359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.110 [2024-05-14 11:43:09.008415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.110 [2024-05-14 11:43:09.008428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.110 [2024-05-14 11:43:09.008483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:ffff0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.110 [2024-05-14 11:43:09.008496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:42.110 #79 NEW cov: 12062 ft: 15332 corp: 35/1166b lim: 40 exec/s: 79 rss: 72Mb L: 40/40 MS: 1 ShuffleBytes- 00:06:42.110 [2024-05-14 11:43:09.058023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:64646464 cdw11:2b646464 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.110 [2024-05-14 11:43:09.058048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.110 [2024-05-14 11:43:09.058105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:64646464 cdw11:64646464 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.110 [2024-05-14 11:43:09.058119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.110 [2024-05-14 11:43:09.058178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:64646464 cdw11:64640aff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.110 [2024-05-14 11:43:09.058195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.110 #80 NEW cov: 12062 ft: 15333 corp: 36/1190b lim: 40 exec/s: 80 rss: 72Mb L: 24/40 MS: 1 InsertByte- 00:06:42.110 [2024-05-14 11:43:09.098426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.110 [2024-05-14 11:43:09.098451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.110 [2024-05-14 11:43:09.098507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ff000000 cdw11:03ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.110 [2024-05-14 11:43:09.098521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.110 [2024-05-14 11:43:09.098574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.110 [2024-05-14 11:43:09.098588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.110 [2024-05-14 11:43:09.098644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.110 [2024-05-14 11:43:09.098657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.110 [2024-05-14 11:43:09.098711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 
cdw10:ffffffff cdw11:ffff0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.110 [2024-05-14 11:43:09.098724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:42.110 #81 NEW cov: 12062 ft: 15347 corp: 37/1230b lim: 40 exec/s: 81 rss: 72Mb L: 40/40 MS: 1 ChangeBinInt- 00:06:42.110 [2024-05-14 11:43:09.138359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.110 [2024-05-14 11:43:09.138390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.110 [2024-05-14 11:43:09.138447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffef cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.110 [2024-05-14 11:43:09.138461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.110 [2024-05-14 11:43:09.138518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.110 [2024-05-14 11:43:09.138532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.111 [2024-05-14 11:43:09.138589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.111 [2024-05-14 11:43:09.138602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.111 #82 NEW cov: 12062 ft: 15352 corp: 38/1267b lim: 40 exec/s: 82 rss: 72Mb L: 37/40 MS: 1 CrossOver- 00:06:42.111 [2024-05-14 11:43:09.178670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.111 [2024-05-14 11:43:09.178696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.111 [2024-05-14 11:43:09.178752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:0100ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.111 [2024-05-14 11:43:09.178772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.111 [2024-05-14 11:43:09.178843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.111 [2024-05-14 11:43:09.178857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.111 [2024-05-14 11:43:09.178912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.111 [2024-05-14 11:43:09.178925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:42.111 [2024-05-14 11:43:09.178981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 
cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.111 [2024-05-14 11:43:09.178994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:42.369 #83 NEW cov: 12062 ft: 15357 corp: 39/1307b lim: 40 exec/s: 83 rss: 72Mb L: 40/40 MS: 1 CopyPart- 00:06:42.369 [2024-05-14 11:43:09.228485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffff0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.369 [2024-05-14 11:43:09.228510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.369 [2024-05-14 11:43:09.228573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.369 [2024-05-14 11:43:09.228587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.369 [2024-05-14 11:43:09.228647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:41ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.369 [2024-05-14 11:43:09.228660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:42.369 #84 NEW cov: 12062 ft: 15412 corp: 40/1338b lim: 40 exec/s: 84 rss: 73Mb L: 31/40 MS: 1 EraseBytes- 00:06:42.369 [2024-05-14 11:43:09.278447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffff7fff cdw11:ff0affff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.369 [2024-05-14 11:43:09.278473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:42.369 [2024-05-14 11:43:09.278549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.369 [2024-05-14 11:43:09.278563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:42.369 #85 NEW cov: 12062 ft: 15424 corp: 41/1359b lim: 40 exec/s: 42 rss: 73Mb L: 21/40 MS: 1 ChangeBit- 00:06:42.369 #85 DONE cov: 12062 ft: 15424 corp: 41/1359b lim: 40 exec/s: 42 rss: 73Mb 00:06:42.369 ###### Recommended dictionary. ###### 00:06:42.370 "\001\000" # Uses: 5 00:06:42.370 ###### End of recommended dictionary. 
###### 00:06:42.370 Done 85 runs in 2 second(s) 00:06:42.370 [2024-05-14 11:43:09.307269] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:42.370 11:43:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:06:42.370 11:43:09 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:42.370 11:43:09 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:42.370 11:43:09 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:06:42.370 11:43:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:06:42.370 11:43:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:42.370 11:43:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:42.370 11:43:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:42.370 11:43:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:06:42.370 11:43:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:42.370 11:43:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:42.370 11:43:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:06:42.370 11:43:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4412 00:06:42.370 11:43:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:42.370 11:43:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:06:42.370 11:43:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:42.370 11:43:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:42.370 11:43:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:42.370 11:43:09 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:06:42.629 [2024-05-14 11:43:09.475958] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
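The nvmf/run.sh trace above shows how the harness derives the per-run parameters for fuzzer 12 before launching it: the TCP listener port is built from the fuzzer number (printf %02d 12 giving trsvcid 4412), the default trsvcid 4420 in fuzz_json.conf is rewritten with sed, known shutdown-path leaks are suppressed through LSAN_OPTIONS, and the shared llvm_nvme_fuzz binary is started against the resulting transport ID. A minimal sketch of the same invocation follows, assuming a built SPDK tree at $SPDK_DIR and placeholder $OUT_DIR/$CORPUS_ROOT directories; the redirects into the config and suppression files are inferred, not shown verbatim in the trace.

  # Sketch only -- reproduces the fuzzer-12 launch traced above; directory variables are placeholders.
  fuzzer_type=12
  port=44$(printf %02d "$fuzzer_type")              # -> trsvcid 4412, as in the trace
  corpus_dir="$CORPUS_ROOT/llvm_nvmf_${fuzzer_type}"
  mkdir -p "$corpus_dir"
  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:${port}"
  # Rewrite the default listener port in the JSON config (run.sh@38); the output file name is inferred.
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"${port}\"/" \
      "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "/tmp/fuzz_json_${fuzzer_type}.conf"
  # Leak suppressions echoed at run.sh@41/@42; assumed to land in the suppress file used by LSAN_OPTIONS.
  printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > /var/tmp/suppress_nvmf_fuzz
  LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 \
  "$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
      -m 0x1 -s 512 -P "$OUT_DIR" -F "$trid" \
      -c "/tmp/fuzz_json_${fuzzer_type}.conf" -t 1 -D "$corpus_dir" -Z "$fuzzer_type"

The flags mirror the traced command line exactly; only the directory variables stand in for the Jenkins workspace paths.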
00:06:42.629 [2024-05-14 11:43:09.476030] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3638432 ] 00:06:42.629 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.887 [2024-05-14 11:43:09.732339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.887 [2024-05-14 11:43:09.824233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.888 [2024-05-14 11:43:09.882974] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.888 [2024-05-14 11:43:09.898936] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:42.888 [2024-05-14 11:43:09.899333] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:06:42.888 INFO: Running with entropic power schedule (0xFF, 100). 00:06:42.888 INFO: Seed: 3395600433 00:06:42.888 INFO: Loaded 1 modules (351429 inline 8-bit counters): 351429 [0x290e30c, 0x2963fd1), 00:06:42.888 INFO: Loaded 1 PC tables (351429 PCs): 351429 [0x2963fd8,0x2ec0c28), 00:06:42.888 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:42.888 INFO: A corpus is not provided, starting from an empty corpus 00:06:42.888 #2 INITED exec/s: 0 rss: 63Mb 00:06:42.888 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:42.888 This may also happen if the target rejected all inputs we tried so far 00:06:42.888 [2024-05-14 11:43:09.944599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:e8ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:42.888 [2024-05-14 11:43:09.944627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.455 NEW_FUNC[1/685]: 0x492670 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:06:43.455 NEW_FUNC[2/685]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:43.455 #17 NEW cov: 11808 ft: 11810 corp: 2/12b lim: 40 exec/s: 0 rss: 70Mb L: 11/11 MS: 5 InsertByte-ChangeBinInt-ChangeBit-CrossOver-InsertRepeatedBytes- 00:06:43.455 [2024-05-14 11:43:10.276805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.455 [2024-05-14 11:43:10.276858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.455 [2024-05-14 11:43:10.276998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:41414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.455 [2024-05-14 11:43:10.277024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.455 [2024-05-14 11:43:10.277164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:41414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
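Status lines like the "#17 NEW cov: ..." record above are libFuzzer's standard progress output: roughly, cov and ft are the coverage-point and feature counters, corp is the corpus size in entries/bytes, lim is the current input-length limit, exec/s and rss are throughput and memory, L is the new input's length against the largest seen, and MS names the mutation sequence (including dictionary entries such as the persistent auto-dict "\001\000" recommended at the end of run 11) that produced the new input. A small sketch for watching coverage growth from a saved console capture, where fuzz_12.log is a hypothetical file name for the output above:

  # Extract iteration number, cov and ft from each NEW status line (hypothetical log capture).
  grep -Eo '#[0-9]+ NEW cov: [0-9]+ ft: [0-9]+' fuzz_12.log | awk '{print $1, $4, $6}'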
00:06:43.455 [2024-05-14 11:43:10.277187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.455 NEW_FUNC[1/1]: 0x1d18460 in thread_update_stats /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:924 00:06:43.455 #23 NEW cov: 11946 ft: 13198 corp: 3/37b lim: 40 exec/s: 0 rss: 70Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:06:43.455 [2024-05-14 11:43:10.326241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:e8ffffff cdw11:fdffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.455 [2024-05-14 11:43:10.326273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.455 #24 NEW cov: 11952 ft: 13444 corp: 4/48b lim: 40 exec/s: 0 rss: 70Mb L: 11/25 MS: 1 ChangeBit- 00:06:43.455 [2024-05-14 11:43:10.386985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.455 [2024-05-14 11:43:10.387013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.455 [2024-05-14 11:43:10.387142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:41414141 cdw11:41412c41 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.455 [2024-05-14 11:43:10.387160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.455 [2024-05-14 11:43:10.387285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:41414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.455 [2024-05-14 11:43:10.387303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.455 #25 NEW cov: 12037 ft: 13827 corp: 5/73b lim: 40 exec/s: 0 rss: 70Mb L: 25/25 MS: 1 ChangeByte- 00:06:43.455 [2024-05-14 11:43:10.437090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.455 [2024-05-14 11:43:10.437118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.455 [2024-05-14 11:43:10.437253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:41414141 cdw11:41412c41 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.455 [2024-05-14 11:43:10.437272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.455 [2024-05-14 11:43:10.437398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:75414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.455 [2024-05-14 11:43:10.437415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.455 #26 NEW cov: 12037 ft: 13913 corp: 6/98b lim: 40 exec/s: 0 rss: 70Mb L: 25/25 MS: 1 ChangeByte- 00:06:43.455 [2024-05-14 11:43:10.486710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:e8ff00fa cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:43.455 [2024-05-14 11:43:10.486737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.455 #27 NEW cov: 12037 ft: 14118 corp: 7/109b lim: 40 exec/s: 0 rss: 70Mb L: 11/25 MS: 1 ChangeBinInt- 00:06:43.455 [2024-05-14 11:43:10.526800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:e8ffffff cdw11:fdffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.455 [2024-05-14 11:43:10.526828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.714 #28 NEW cov: 12037 ft: 14187 corp: 8/118b lim: 40 exec/s: 0 rss: 71Mb L: 9/25 MS: 1 EraseBytes- 00:06:43.714 [2024-05-14 11:43:10.577071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.714 [2024-05-14 11:43:10.577100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.714 [2024-05-14 11:43:10.577230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:41414141 cdw11:41412c41 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.714 [2024-05-14 11:43:10.577248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.714 [2024-05-14 11:43:10.577387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:41414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.714 [2024-05-14 11:43:10.577406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.714 #29 NEW cov: 12037 ft: 14260 corp: 9/143b lim: 40 exec/s: 0 rss: 71Mb L: 25/25 MS: 1 ShuffleBytes- 00:06:43.714 [2024-05-14 11:43:10.627183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:e8ffffff cdw11:fdffff3b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.714 [2024-05-14 11:43:10.627211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.714 #30 NEW cov: 12037 ft: 14289 corp: 10/152b lim: 40 exec/s: 0 rss: 71Mb L: 9/25 MS: 1 ChangeByte- 00:06:43.714 [2024-05-14 11:43:10.667254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:e8ff00fa cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.714 [2024-05-14 11:43:10.667282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.714 #31 NEW cov: 12037 ft: 14332 corp: 11/163b lim: 40 exec/s: 0 rss: 71Mb L: 11/25 MS: 1 ChangeBinInt- 00:06:43.714 [2024-05-14 11:43:10.707788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.714 [2024-05-14 11:43:10.707816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.714 [2024-05-14 11:43:10.707937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:41414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:43.714 [2024-05-14 11:43:10.707957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.714 [2024-05-14 11:43:10.708093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:4141412c cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.714 [2024-05-14 11:43:10.708113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.714 #32 NEW cov: 12037 ft: 14432 corp: 12/193b lim: 40 exec/s: 0 rss: 71Mb L: 30/30 MS: 1 CopyPart- 00:06:43.714 [2024-05-14 11:43:10.757424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffe800fa cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.714 [2024-05-14 11:43:10.757452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.714 #33 NEW cov: 12037 ft: 14438 corp: 13/204b lim: 40 exec/s: 0 rss: 71Mb L: 11/30 MS: 1 ShuffleBytes- 00:06:43.973 [2024-05-14 11:43:10.817661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:e8ffffff cdw11:fdfdfdfd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.973 [2024-05-14 11:43:10.817690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.973 NEW_FUNC[1/1]: 0x19feca0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:43.973 #34 NEW cov: 12060 ft: 14472 corp: 14/218b lim: 40 exec/s: 0 rss: 71Mb L: 14/30 MS: 1 InsertRepeatedBytes- 00:06:43.973 [2024-05-14 11:43:10.857385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:e8ffffff cdw11:fdffff3b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.973 [2024-05-14 11:43:10.857414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.973 #35 NEW cov: 12060 ft: 14511 corp: 15/227b lim: 40 exec/s: 0 rss: 71Mb L: 9/30 MS: 1 ShuffleBytes- 00:06:43.973 [2024-05-14 11:43:10.917953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:e8ffffff cdw11:fdffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.973 [2024-05-14 11:43:10.917980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.973 #36 NEW cov: 12060 ft: 14544 corp: 16/240b lim: 40 exec/s: 36 rss: 71Mb L: 13/30 MS: 1 CopyPart- 00:06:43.973 [2024-05-14 11:43:10.968745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a414141 cdw11:a9414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.973 [2024-05-14 11:43:10.968773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.973 [2024-05-14 11:43:10.968890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:41414141 cdw11:41412c41 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.973 [2024-05-14 11:43:10.968907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.973 [2024-05-14 11:43:10.969031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:75414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.973 [2024-05-14 11:43:10.969051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.973 #37 NEW cov: 12060 ft: 14564 corp: 17/265b lim: 40 exec/s: 37 rss: 72Mb L: 25/30 MS: 1 ChangeByte- 00:06:43.973 [2024-05-14 11:43:11.018875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.973 [2024-05-14 11:43:11.018903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:43.973 [2024-05-14 11:43:11.019037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:41414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.973 [2024-05-14 11:43:11.019054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:43.973 [2024-05-14 11:43:11.019182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:41414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.973 [2024-05-14 11:43:11.019200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:43.973 [2024-05-14 11:43:11.019326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:41414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.973 [2024-05-14 11:43:11.019344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:43.973 #38 NEW cov: 12060 ft: 14902 corp: 18/300b lim: 40 exec/s: 38 rss: 72Mb L: 35/35 MS: 1 CopyPart- 00:06:43.973 [2024-05-14 11:43:11.058066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:e8ff3bff cdw11:fdffff3b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:43.973 [2024-05-14 11:43:11.058093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.233 #39 NEW cov: 12060 ft: 14915 corp: 19/309b lim: 40 exec/s: 39 rss: 72Mb L: 9/35 MS: 1 ChangeByte- 00:06:44.233 [2024-05-14 11:43:11.098615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:e8ffffff cdw11:fdfdfd00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.233 [2024-05-14 11:43:11.098642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.233 [2024-05-14 11:43:11.098781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:fdffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.233 [2024-05-14 11:43:11.098799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.233 #40 NEW cov: 12060 ft: 15116 corp: 20/328b lim: 40 exec/s: 40 rss: 72Mb L: 19/35 MS: 1 InsertRepeatedBytes- 00:06:44.233 [2024-05-14 11:43:11.148690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:44.233 [2024-05-14 11:43:11.148718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.233 [2024-05-14 11:43:11.148861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:41414141 cdw11:41412c41 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.233 [2024-05-14 11:43:11.148880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.233 [2024-05-14 11:43:11.149009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:41414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.233 [2024-05-14 11:43:11.149028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.233 #41 NEW cov: 12060 ft: 15133 corp: 21/353b lim: 40 exec/s: 41 rss: 72Mb L: 25/35 MS: 1 ChangeByte- 00:06:44.233 [2024-05-14 11:43:11.188298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffe800fa cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.233 [2024-05-14 11:43:11.188325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.233 #42 NEW cov: 12060 ft: 15157 corp: 22/364b lim: 40 exec/s: 42 rss: 72Mb L: 11/35 MS: 1 ChangeBinInt- 00:06:44.233 [2024-05-14 11:43:11.238861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:e8ffffff cdw11:fdfdfdfd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.233 [2024-05-14 11:43:11.238889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.233 #43 NEW cov: 12060 ft: 15181 corp: 23/378b lim: 40 exec/s: 43 rss: 72Mb L: 14/35 MS: 1 ShuffleBytes- 00:06:44.233 [2024-05-14 11:43:11.279332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.233 [2024-05-14 11:43:11.279359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.233 [2024-05-14 11:43:11.279497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:41414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.233 [2024-05-14 11:43:11.279514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.233 [2024-05-14 11:43:11.279651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:41414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.233 [2024-05-14 11:43:11.279669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.233 #44 NEW cov: 12060 ft: 15214 corp: 24/403b lim: 40 exec/s: 44 rss: 72Mb L: 25/35 MS: 1 CopyPart- 00:06:44.492 [2024-05-14 11:43:11.329401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:7b7be8ff cdw11:fffffdff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.492 [2024-05-14 11:43:11.329439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.492 [2024-05-14 11:43:11.329581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:fffffd7b cdw11:7b7b7b31 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.492 [2024-05-14 11:43:11.329601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.492 #47 NEW cov: 12060 ft: 15220 corp: 25/423b lim: 40 exec/s: 47 rss: 72Mb L: 20/35 MS: 3 ChangeByte-InsertRepeatedBytes-CrossOver- 00:06:44.492 [2024-05-14 11:43:11.368802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:e8ffffff cdw11:fd31ff3b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.492 [2024-05-14 11:43:11.368829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.492 #48 NEW cov: 12060 ft: 15240 corp: 26/432b lim: 40 exec/s: 48 rss: 72Mb L: 9/35 MS: 1 ChangeByte- 00:06:44.492 [2024-05-14 11:43:11.418937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffe800fa cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.492 [2024-05-14 11:43:11.418962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.492 #49 NEW cov: 12060 ft: 15253 corp: 27/443b lim: 40 exec/s: 49 rss: 72Mb L: 11/35 MS: 1 ChangeBit- 00:06:44.492 [2024-05-14 11:43:11.469548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:eeffffff cdw11:fdfdfdfd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.492 [2024-05-14 11:43:11.469575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.492 #55 NEW cov: 12060 ft: 15259 corp: 28/457b lim: 40 exec/s: 55 rss: 72Mb L: 14/35 MS: 1 ChangeBinInt- 00:06:44.492 [2024-05-14 11:43:11.520205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.492 [2024-05-14 11:43:11.520231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.492 [2024-05-14 11:43:11.520370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:41414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.492 [2024-05-14 11:43:11.520393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.492 [2024-05-14 11:43:11.520523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:4a414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.492 [2024-05-14 11:43:11.520540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.492 #56 NEW cov: 12060 ft: 15269 corp: 29/482b lim: 40 exec/s: 56 rss: 73Mb L: 25/35 MS: 1 ChangeBinInt- 00:06:44.492 [2024-05-14 11:43:11.579896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:e8ff7314 cdw11:faa2573d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.492 [2024-05-14 11:43:11.579924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.751 #59 NEW cov: 12060 ft: 15309 corp: 30/496b lim: 40 exec/s: 59 rss: 73Mb L: 14/35 MS: 3 EraseBytes-ChangeBit-CMP- DE: "s\024\372\242W=\205\000"- 00:06:44.751 [2024-05-14 11:43:11.640613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffff0a cdw11:414141a9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.751 [2024-05-14 11:43:11.640644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.751 [2024-05-14 11:43:11.640784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:41414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.751 [2024-05-14 11:43:11.640804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.751 [2024-05-14 11:43:11.640938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:412c4175 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.751 [2024-05-14 11:43:11.640958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.751 #60 NEW cov: 12060 ft: 15346 corp: 31/524b lim: 40 exec/s: 60 rss: 73Mb L: 28/35 MS: 1 InsertRepeatedBytes- 00:06:44.751 [2024-05-14 11:43:11.700753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.751 [2024-05-14 11:43:11.700781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.751 [2024-05-14 11:43:11.700927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:41414141 cdw11:41412c41 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.751 [2024-05-14 11:43:11.700944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.751 [2024-05-14 11:43:11.701078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:41414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.751 [2024-05-14 11:43:11.701098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:44.751 #61 NEW cov: 12060 ft: 15360 corp: 32/549b lim: 40 exec/s: 61 rss: 73Mb L: 25/35 MS: 1 ShuffleBytes- 00:06:44.751 [2024-05-14 11:43:11.739989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffe8ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.751 [2024-05-14 11:43:11.740016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.751 #62 NEW cov: 12060 ft: 15371 corp: 33/560b lim: 40 exec/s: 62 rss: 73Mb L: 11/35 MS: 1 ShuffleBytes- 00:06:44.751 [2024-05-14 11:43:11.780476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:e8ff002f cdw11:faffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.751 [2024-05-14 11:43:11.780503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.751 #63 NEW 
cov: 12060 ft: 15411 corp: 34/572b lim: 40 exec/s: 63 rss: 73Mb L: 12/35 MS: 1 InsertByte- 00:06:44.751 [2024-05-14 11:43:11.820866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:410a4141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.751 [2024-05-14 11:43:11.820893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:44.751 [2024-05-14 11:43:11.821020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:41414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.751 [2024-05-14 11:43:11.821039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:44.751 [2024-05-14 11:43:11.821169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:4141412c cdw11:41754141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.751 [2024-05-14 11:43:11.821186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.010 #64 NEW cov: 12060 ft: 15441 corp: 35/602b lim: 40 exec/s: 64 rss: 73Mb L: 30/35 MS: 1 CrossOver- 00:06:45.010 [2024-05-14 11:43:11.860603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a414141 cdw11:a9414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.010 [2024-05-14 11:43:11.860630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.010 [2024-05-14 11:43:11.860749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:41414141 cdw11:41412c41 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.010 [2024-05-14 11:43:11.860768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.010 #65 NEW cov: 12060 ft: 15454 corp: 36/620b lim: 40 exec/s: 65 rss: 73Mb L: 18/35 MS: 1 EraseBytes- 00:06:45.010 [2024-05-14 11:43:11.901300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ff414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.010 [2024-05-14 11:43:11.901327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.010 [2024-05-14 11:43:11.901471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:41414141 cdw11:4141412c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.010 [2024-05-14 11:43:11.901490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.010 [2024-05-14 11:43:11.901624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:41414141 cdw11:41414141 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.010 [2024-05-14 11:43:11.901641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.010 #66 NEW cov: 12060 ft: 15469 corp: 37/645b lim: 40 exec/s: 66 rss: 73Mb L: 25/35 MS: 1 ShuffleBytes- 00:06:45.010 [2024-05-14 11:43:11.940413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 
cdw10:fffaffe8 cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.010 [2024-05-14 11:43:11.940440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.010 #67 NEW cov: 12060 ft: 15490 corp: 38/656b lim: 40 exec/s: 33 rss: 73Mb L: 11/35 MS: 1 ShuffleBytes- 00:06:45.010 #67 DONE cov: 12060 ft: 15490 corp: 38/656b lim: 40 exec/s: 33 rss: 73Mb 00:06:45.010 ###### Recommended dictionary. ###### 00:06:45.010 "s\024\372\242W=\205\000" # Uses: 0 00:06:45.010 ###### End of recommended dictionary. ###### 00:06:45.010 Done 67 runs in 2 second(s) 00:06:45.010 [2024-05-14 11:43:11.960972] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:45.010 11:43:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:06:45.010 11:43:12 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:45.010 11:43:12 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:45.010 11:43:12 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:06:45.010 11:43:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:06:45.010 11:43:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:45.010 11:43:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:45.010 11:43:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:45.010 11:43:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:06:45.010 11:43:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:45.010 11:43:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:45.010 11:43:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:06:45.010 11:43:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4413 00:06:45.010 11:43:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:45.010 11:43:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:06:45.010 11:43:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:45.010 11:43:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:45.270 11:43:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:45.270 11:43:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:06:45.270 [2024-05-14 11:43:12.125499] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:06:45.270 [2024-05-14 11:43:12.125582] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3638767 ] 00:06:45.270 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.270 [2024-05-14 11:43:12.309961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.527 [2024-05-14 11:43:12.376446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.527 [2024-05-14 11:43:12.435235] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.527 [2024-05-14 11:43:12.451187] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:45.528 [2024-05-14 11:43:12.451600] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:06:45.528 INFO: Running with entropic power schedule (0xFF, 100). 00:06:45.528 INFO: Seed: 1652638490 00:06:45.528 INFO: Loaded 1 modules (351429 inline 8-bit counters): 351429 [0x290e30c, 0x2963fd1), 00:06:45.528 INFO: Loaded 1 PC tables (351429 PCs): 351429 [0x2963fd8,0x2ec0c28), 00:06:45.528 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:45.528 INFO: A corpus is not provided, starting from an empty corpus 00:06:45.528 #2 INITED exec/s: 0 rss: 64Mb 00:06:45.528 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:45.528 This may also happen if the target rejected all inputs we tried so far 00:06:45.528 [2024-05-14 11:43:12.517488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:cd0aff84 cdw11:3d582e1c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.528 [2024-05-14 11:43:12.517523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.786 NEW_FUNC[1/685]: 0x494230 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:06:45.786 NEW_FUNC[2/685]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:45.786 #7 NEW cov: 11803 ft: 11805 corp: 2/11b lim: 40 exec/s: 0 rss: 70Mb L: 10/10 MS: 5 InsertByte-ChangeByte-ChangeBit-ChangeByte-CMP- DE: "\377\204=X.\034\273\262"- 00:06:45.786 [2024-05-14 11:43:12.848806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.786 [2024-05-14 11:43:12.848855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:45.786 [2024-05-14 11:43:12.849002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.786 [2024-05-14 11:43:12.849027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:45.786 [2024-05-14 11:43:12.849164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:45.786 [2024-05-14 11:43:12.849189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:45.786 #11 NEW cov: 11934 ft: 12777 corp: 3/38b lim: 40 exec/s: 0 rss: 70Mb L: 27/27 MS: 4 ShuffleBytes-CrossOver-CrossOver-InsertRepeatedBytes- 00:06:46.044 [2024-05-14 11:43:12.888992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.044 [2024-05-14 11:43:12.889021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.044 [2024-05-14 11:43:12.889148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.044 [2024-05-14 11:43:12.889165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.044 [2024-05-14 11:43:12.889295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.044 [2024-05-14 11:43:12.889314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.044 [2024-05-14 11:43:12.889449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.044 [2024-05-14 11:43:12.889467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.044 #15 NEW cov: 11940 ft: 13465 corp: 4/75b lim: 40 exec/s: 0 rss: 70Mb L: 37/37 MS: 4 CopyPart-CrossOver-ShuffleBytes-InsertRepeatedBytes- 00:06:46.044 [2024-05-14 11:43:12.929181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.044 [2024-05-14 11:43:12.929209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.044 [2024-05-14 11:43:12.929338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00850000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.044 [2024-05-14 11:43:12.929357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.044 [2024-05-14 11:43:12.929486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.044 [2024-05-14 11:43:12.929503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.044 [2024-05-14 11:43:12.929622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.044 [2024-05-14 11:43:12.929643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.044 #21 NEW cov: 12025 ft: 
13862 corp: 5/113b lim: 40 exec/s: 0 rss: 70Mb L: 38/38 MS: 1 InsertByte- 00:06:46.044 [2024-05-14 11:43:12.979056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8a787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.044 [2024-05-14 11:43:12.979082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.044 [2024-05-14 11:43:12.979216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.044 [2024-05-14 11:43:12.979233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.044 [2024-05-14 11:43:12.979365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.044 [2024-05-14 11:43:12.979385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.044 #23 NEW cov: 12025 ft: 14015 corp: 6/144b lim: 40 exec/s: 0 rss: 70Mb L: 31/38 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:46.044 [2024-05-14 11:43:13.018912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:cd0aff84 cdw11:3dff843d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.044 [2024-05-14 11:43:13.018940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.044 [2024-05-14 11:43:13.019061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:582e1cbb cdw11:b2582e1c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.044 [2024-05-14 11:43:13.019079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.044 #24 NEW cov: 12025 ft: 14302 corp: 7/162b lim: 40 exec/s: 0 rss: 70Mb L: 18/38 MS: 1 CopyPart- 00:06:46.044 [2024-05-14 11:43:13.069314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8a787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.044 [2024-05-14 11:43:13.069342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.044 [2024-05-14 11:43:13.069473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.044 [2024-05-14 11:43:13.069492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.044 [2024-05-14 11:43:13.069612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.044 [2024-05-14 11:43:13.069632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.044 #25 NEW cov: 12025 ft: 14359 corp: 8/193b lim: 40 exec/s: 0 rss: 70Mb L: 31/38 MS: 1 CopyPart- 00:06:46.044 [2024-05-14 11:43:13.118986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE 
(1a) qid:0 cid:4 nsid:0 cdw10:0a510a51 cdw11:faff19fa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.044 [2024-05-14 11:43:13.119013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.303 #29 NEW cov: 12025 ft: 14381 corp: 9/203b lim: 40 exec/s: 0 rss: 70Mb L: 10/38 MS: 4 CMP-InsertByte-InsertByte-CopyPart- DE: "\377\031"- 00:06:46.303 [2024-05-14 11:43:13.159423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:cd0aff84 cdw11:35ff843d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.303 [2024-05-14 11:43:13.159452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.303 [2024-05-14 11:43:13.159582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:582e1cbb cdw11:b2582e1c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.303 [2024-05-14 11:43:13.159601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.303 #30 NEW cov: 12025 ft: 14402 corp: 10/221b lim: 40 exec/s: 0 rss: 71Mb L: 18/38 MS: 1 ChangeBit- 00:06:46.303 [2024-05-14 11:43:13.209772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8a787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.303 [2024-05-14 11:43:13.209799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.303 [2024-05-14 11:43:13.209935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.303 [2024-05-14 11:43:13.209953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.303 [2024-05-14 11:43:13.210087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.303 [2024-05-14 11:43:13.210104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.303 #31 NEW cov: 12025 ft: 14466 corp: 11/252b lim: 40 exec/s: 0 rss: 71Mb L: 31/38 MS: 1 ShuffleBytes- 00:06:46.303 [2024-05-14 11:43:13.260220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8a787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.303 [2024-05-14 11:43:13.260248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.303 [2024-05-14 11:43:13.260387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.303 [2024-05-14 11:43:13.260404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.303 [2024-05-14 11:43:13.260527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:7878780a cdw11:ff843d58 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.303 [2024-05-14 11:43:13.260545] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.303 [2024-05-14 11:43:13.260670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:2e1cbb78 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.303 [2024-05-14 11:43:13.260686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.303 #32 NEW cov: 12025 ft: 14494 corp: 12/291b lim: 40 exec/s: 0 rss: 71Mb L: 39/39 MS: 1 CrossOver- 00:06:46.303 [2024-05-14 11:43:13.300021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8a787888 cdw11:8c787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.303 [2024-05-14 11:43:13.300049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.303 [2024-05-14 11:43:13.300179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.303 [2024-05-14 11:43:13.300197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.303 [2024-05-14 11:43:13.300326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.303 [2024-05-14 11:43:13.300347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.303 #33 NEW cov: 12025 ft: 14518 corp: 13/322b lim: 40 exec/s: 0 rss: 71Mb L: 31/39 MS: 1 ChangeBinInt- 00:06:46.303 [2024-05-14 11:43:13.339753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00007fed cdw11:c80e54f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.303 [2024-05-14 11:43:13.339780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.303 #34 NEW cov: 12025 ft: 14544 corp: 14/332b lim: 40 exec/s: 0 rss: 71Mb L: 10/39 MS: 1 CMP- DE: "\000\000\177\355\310\016T\361"- 00:06:46.303 [2024-05-14 11:43:13.389820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a510a51 cdw11:baff19fa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.303 [2024-05-14 11:43:13.389847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.562 NEW_FUNC[1/1]: 0x19feca0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:46.562 #35 NEW cov: 12048 ft: 14584 corp: 15/342b lim: 40 exec/s: 0 rss: 71Mb L: 10/39 MS: 1 ChangeBit- 00:06:46.562 [2024-05-14 11:43:13.439973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00007fed cdw11:c80e54f3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.562 [2024-05-14 11:43:13.440002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.562 #36 NEW cov: 12048 ft: 14638 corp: 16/352b lim: 40 exec/s: 0 rss: 71Mb L: 10/39 MS: 1 ChangeBit- 00:06:46.562 [2024-05-14 11:43:13.490559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.562 [2024-05-14 11:43:13.490587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.562 [2024-05-14 11:43:13.490717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000085 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.562 [2024-05-14 11:43:13.490735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.562 [2024-05-14 11:43:13.490856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.562 [2024-05-14 11:43:13.490874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.562 [2024-05-14 11:43:13.490994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.562 [2024-05-14 11:43:13.491011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.562 [2024-05-14 11:43:13.491135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.562 [2024-05-14 11:43:13.491154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:46.562 #37 NEW cov: 12048 ft: 14757 corp: 17/392b lim: 40 exec/s: 37 rss: 71Mb L: 40/40 MS: 1 CopyPart- 00:06:46.562 [2024-05-14 11:43:13.540244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.562 [2024-05-14 11:43:13.540272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.562 #38 NEW cov: 12048 ft: 14772 corp: 18/402b lim: 40 exec/s: 38 rss: 71Mb L: 10/40 MS: 1 InsertRepeatedBytes- 00:06:46.562 [2024-05-14 11:43:13.580425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a510a51 cdw11:fafa19fa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.562 [2024-05-14 11:43:13.580454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.562 #39 NEW cov: 12048 ft: 14802 corp: 19/412b lim: 40 exec/s: 39 rss: 71Mb L: 10/40 MS: 1 ChangeBinInt- 00:06:46.562 [2024-05-14 11:43:13.621075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8a787878 cdw11:78ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.562 [2024-05-14 11:43:13.621102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.562 [2024-05-14 11:43:13.621228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ff787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.562 [2024-05-14 11:43:13.621246] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.562 [2024-05-14 11:43:13.621384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.562 [2024-05-14 11:43:13.621400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.562 [2024-05-14 11:43:13.621522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.562 [2024-05-14 11:43:13.621538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.562 #40 NEW cov: 12048 ft: 14811 corp: 20/447b lim: 40 exec/s: 40 rss: 71Mb L: 35/40 MS: 1 InsertRepeatedBytes- 00:06:46.821 [2024-05-14 11:43:13.671281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8a787878 cdw11:78ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.821 [2024-05-14 11:43:13.671311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.821 [2024-05-14 11:43:13.671443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ff787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.821 [2024-05-14 11:43:13.671463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.821 [2024-05-14 11:43:13.671589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.821 [2024-05-14 11:43:13.671608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.821 [2024-05-14 11:43:13.671732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.821 [2024-05-14 11:43:13.671749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:46.821 #41 NEW cov: 12048 ft: 14828 corp: 21/482b lim: 40 exec/s: 41 rss: 71Mb L: 35/40 MS: 1 ShuffleBytes- 00:06:46.821 [2024-05-14 11:43:13.720800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a510a51 cdw11:faff19fa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.821 [2024-05-14 11:43:13.720829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.821 #42 NEW cov: 12048 ft: 14832 corp: 22/492b lim: 40 exec/s: 42 rss: 71Mb L: 10/40 MS: 1 ChangeBinInt- 00:06:46.821 [2024-05-14 11:43:13.760751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.821 [2024-05-14 11:43:13.760783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.821 [2024-05-14 11:43:13.760918] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.821 [2024-05-14 11:43:13.760935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.821 #43 NEW cov: 12048 ft: 14947 corp: 23/513b lim: 40 exec/s: 43 rss: 72Mb L: 21/40 MS: 1 EraseBytes- 00:06:46.821 [2024-05-14 11:43:13.810863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.821 [2024-05-14 11:43:13.810892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.821 #44 NEW cov: 12048 ft: 14991 corp: 24/524b lim: 40 exec/s: 44 rss: 72Mb L: 11/40 MS: 1 InsertByte- 00:06:46.821 [2024-05-14 11:43:13.861549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.821 [2024-05-14 11:43:13.861577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:46.821 [2024-05-14 11:43:13.861719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.821 [2024-05-14 11:43:13.861739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:46.821 [2024-05-14 11:43:13.861875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff27ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:46.821 [2024-05-14 11:43:13.861892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:46.821 #45 NEW cov: 12048 ft: 14999 corp: 25/551b lim: 40 exec/s: 45 rss: 72Mb L: 27/40 MS: 1 ChangeByte- 00:06:47.080 [2024-05-14 11:43:13.911475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:cd0aff58 cdw11:843d3584 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.080 [2024-05-14 11:43:13.911505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.080 [2024-05-14 11:43:13.911633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ff2e1cbb cdw11:b2582e1c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.080 [2024-05-14 11:43:13.911650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.080 #46 NEW cov: 12048 ft: 15020 corp: 26/569b lim: 40 exec/s: 46 rss: 72Mb L: 18/40 MS: 1 ShuffleBytes- 00:06:47.080 [2024-05-14 11:43:13.961454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a510a51 cdw11:51fafaff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.080 [2024-05-14 11:43:13.961482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.080 #47 NEW cov: 12048 ft: 15027 corp: 27/581b lim: 40 exec/s: 47 rss: 72Mb L: 12/40 MS: 1 CopyPart- 00:06:47.080 [2024-05-14 
11:43:14.002174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.080 [2024-05-14 11:43:14.002202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.080 [2024-05-14 11:43:14.002324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00850000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.080 [2024-05-14 11:43:14.002345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.080 [2024-05-14 11:43:14.002477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.080 [2024-05-14 11:43:14.002494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.081 [2024-05-14 11:43:14.002620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ff843d58 cdw11:2e1cbbb2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.081 [2024-05-14 11:43:14.002637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.081 #48 NEW cov: 12048 ft: 15045 corp: 28/619b lim: 40 exec/s: 48 rss: 72Mb L: 38/40 MS: 1 PersAutoDict- DE: "\377\204=X.\034\273\262"- 00:06:47.081 [2024-05-14 11:43:14.042122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.081 [2024-05-14 11:43:14.042150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.081 [2024-05-14 11:43:14.042266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00850000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.081 [2024-05-14 11:43:14.042285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.081 [2024-05-14 11:43:14.042418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.081 [2024-05-14 11:43:14.042436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.081 [2024-05-14 11:43:14.042582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ff843058 cdw11:2e1cbbb2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.081 [2024-05-14 11:43:14.042599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.081 #49 NEW cov: 12048 ft: 15079 corp: 29/657b lim: 40 exec/s: 49 rss: 72Mb L: 38/40 MS: 1 ChangeByte- 00:06:47.081 [2024-05-14 11:43:14.091880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.081 [2024-05-14 11:43:14.091908] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.081 #50 NEW cov: 12048 ft: 15152 corp: 30/670b lim: 40 exec/s: 50 rss: 72Mb L: 13/40 MS: 1 CrossOver- 00:06:47.081 [2024-05-14 11:43:14.132500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.081 [2024-05-14 11:43:14.132527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.081 [2024-05-14 11:43:14.132661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00850000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.081 [2024-05-14 11:43:14.132678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.081 [2024-05-14 11:43:14.132800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.081 [2024-05-14 11:43:14.132817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.081 [2024-05-14 11:43:14.132950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:582e1cbb cdw11:b2000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.081 [2024-05-14 11:43:14.132971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.081 #51 NEW cov: 12048 ft: 15154 corp: 31/705b lim: 40 exec/s: 51 rss: 72Mb L: 35/40 MS: 1 EraseBytes- 00:06:47.340 [2024-05-14 11:43:14.172017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a510a51 cdw11:fafa19fa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.340 [2024-05-14 11:43:14.172044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.340 #52 NEW cov: 12048 ft: 15172 corp: 32/715b lim: 40 exec/s: 52 rss: 72Mb L: 10/40 MS: 1 ShuffleBytes- 00:06:47.340 [2024-05-14 11:43:14.212623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8a787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.340 [2024-05-14 11:43:14.212650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.340 [2024-05-14 11:43:14.212777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.340 [2024-05-14 11:43:14.212796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.340 [2024-05-14 11:43:14.212924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:78780000 cdw11:02007878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.340 [2024-05-14 11:43:14.212942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.340 #53 NEW cov: 12048 ft: 15178 corp: 33/746b lim: 40 exec/s: 53 rss: 72Mb L: 31/40 MS: 1 CMP- DE: 
"\000\000\002\000"- 00:06:47.340 [2024-05-14 11:43:14.252703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8a787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.340 [2024-05-14 11:43:14.252731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.340 [2024-05-14 11:43:14.252867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.340 [2024-05-14 11:43:14.252885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.340 [2024-05-14 11:43:14.253010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.340 [2024-05-14 11:43:14.253027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.340 #54 NEW cov: 12048 ft: 15182 corp: 34/777b lim: 40 exec/s: 54 rss: 72Mb L: 31/40 MS: 1 ChangeBit- 00:06:47.340 [2024-05-14 11:43:14.292823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8a787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.340 [2024-05-14 11:43:14.292850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.340 [2024-05-14 11:43:14.292990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.340 [2024-05-14 11:43:14.293008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.340 [2024-05-14 11:43:14.293138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.340 [2024-05-14 11:43:14.293156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.340 #55 NEW cov: 12048 ft: 15195 corp: 35/808b lim: 40 exec/s: 55 rss: 72Mb L: 31/40 MS: 1 CopyPart- 00:06:47.340 [2024-05-14 11:43:14.333119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8a787878 cdw11:78ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.340 [2024-05-14 11:43:14.333145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.340 [2024-05-14 11:43:14.333279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ff787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.340 [2024-05-14 11:43:14.333297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.340 [2024-05-14 11:43:14.333441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:78787878 cdw11:78927878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.340 [2024-05-14 11:43:14.333461] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.340 [2024-05-14 11:43:14.333582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.340 [2024-05-14 11:43:14.333600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.340 #56 NEW cov: 12048 ft: 15211 corp: 36/843b lim: 40 exec/s: 56 rss: 72Mb L: 35/40 MS: 1 ChangeBinInt- 00:06:47.340 [2024-05-14 11:43:14.373225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff6b6b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.340 [2024-05-14 11:43:14.373251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.340 [2024-05-14 11:43:14.373384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:6b6b6b6b cdw11:6b6b6b6b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.340 [2024-05-14 11:43:14.373401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.340 [2024-05-14 11:43:14.373530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:6b6b6b6b cdw11:6b6b6b6b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.340 [2024-05-14 11:43:14.373546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.340 [2024-05-14 11:43:14.373670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:6b6b6b6b cdw11:6bffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.340 [2024-05-14 11:43:14.373689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:47.340 #57 NEW cov: 12048 ft: 15236 corp: 37/876b lim: 40 exec/s: 57 rss: 72Mb L: 33/40 MS: 1 InsertRepeatedBytes- 00:06:47.340 [2024-05-14 11:43:14.413164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8a787888 cdw11:8c787830 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.340 [2024-05-14 11:43:14.413191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.340 [2024-05-14 11:43:14.413329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.340 [2024-05-14 11:43:14.413346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.340 [2024-05-14 11:43:14.413478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.340 [2024-05-14 11:43:14.413499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.600 #58 NEW cov: 12048 ft: 15245 corp: 38/907b lim: 40 exec/s: 58 rss: 73Mb L: 31/40 MS: 1 ChangeByte- 00:06:47.600 [2024-05-14 11:43:14.463361] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8a787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.600 [2024-05-14 11:43:14.463392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.600 [2024-05-14 11:43:14.463526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.600 [2024-05-14 11:43:14.463542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.600 [2024-05-14 11:43:14.463663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:78780000 cdw11:02007878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.600 [2024-05-14 11:43:14.463691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.600 #59 NEW cov: 12048 ft: 15251 corp: 39/938b lim: 40 exec/s: 59 rss: 73Mb L: 31/40 MS: 1 ShuffleBytes- 00:06:47.600 [2024-05-14 11:43:14.513467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:8a787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.600 [2024-05-14 11:43:14.513493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:47.600 [2024-05-14 11:43:14.513627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:78787878 cdw11:78787878 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.600 [2024-05-14 11:43:14.513645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:47.600 [2024-05-14 11:43:14.513763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:78787878 cdw11:7878787c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:47.600 [2024-05-14 11:43:14.513781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:47.600 #60 NEW cov: 12048 ft: 15258 corp: 40/969b lim: 40 exec/s: 30 rss: 73Mb L: 31/40 MS: 1 ChangeByte- 00:06:47.600 #60 DONE cov: 12048 ft: 15258 corp: 40/969b lim: 40 exec/s: 30 rss: 73Mb 00:06:47.600 ###### Recommended dictionary. ###### 00:06:47.600 "\377\204=X.\034\273\262" # Uses: 1 00:06:47.600 "\377\031" # Uses: 0 00:06:47.600 "\000\000\177\355\310\016T\361" # Uses: 0 00:06:47.600 "\000\000\002\000" # Uses: 0 00:06:47.600 ###### End of recommended dictionary. 
###### 00:06:47.600 Done 60 runs in 2 second(s) 00:06:47.600 [2024-05-14 11:43:14.534109] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:47.600 11:43:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:06:47.600 11:43:14 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:47.600 11:43:14 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:47.600 11:43:14 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:06:47.600 11:43:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:06:47.600 11:43:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:47.600 11:43:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:47.600 11:43:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:47.600 11:43:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:06:47.600 11:43:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:47.600 11:43:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:47.600 11:43:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:06:47.600 11:43:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4414 00:06:47.600 11:43:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:47.600 11:43:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:06:47.600 11:43:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:47.600 11:43:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:47.600 11:43:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:47.600 11:43:14 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:06:47.859 [2024-05-14 11:43:14.701505] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
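The nvmf/run.sh xtrace above shows how each fuzzer run gets its own NVMe-oF/TCP target: a per-run port is derived from the fuzzer index, the shared fuzz_json.conf is rewritten to listen on it, and the matching transport ID is passed to llvm_nvme_fuzz via -F. The bash sketch below restates those steps; the 44xx port derivation and the redirection of the sed output into the per-run config are inferred from the printed values, since xtrace does not echo redirections.

#!/usr/bin/env bash
# Minimal sketch of the per-fuzzer target preparation traced above (run.sh@23-@45).
# Assumptions: the "44" port prefix is inferred from `printf %02d 14` followed by
# `port=4414`, and the sed output is assumed to be written into $nvmf_cfg.
spdk_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
fuzzer_type=14
timen=1
core=0x1
nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
port="44$(printf %02d "$fuzzer_type")"        # -> 4414 for fuzzer 14
trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
# The stock fuzz_json.conf listens on 4420; point this run at its own port instead.
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    "$spdk_dir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
# run.sh@45 then launches the fuzzer with the same flags echoed in the trace above.
"$spdk_dir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
    -P "$spdk_dir/../output/llvm/" -F "$trid" -c "$nvmf_cfg" -t "$timen" \
    -D "$spdk_dir/../corpus/llvm_nvmf_${fuzzer_type}" -Z "$fuzzer_type"

The same pattern repeats for fuzzer 15 further below, with port 4415 and /tmp/fuzz_json_15.conf.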
00:06:47.859 [2024-05-14 11:43:14.701582] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3639286 ] 00:06:47.859 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.859 [2024-05-14 11:43:14.879777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.859 [2024-05-14 11:43:14.945755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.118 [2024-05-14 11:43:15.004520] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:48.118 [2024-05-14 11:43:15.020476] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:48.118 [2024-05-14 11:43:15.020838] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:06:48.118 INFO: Running with entropic power schedule (0xFF, 100). 00:06:48.118 INFO: Seed: 4222634035 00:06:48.118 INFO: Loaded 1 modules (351429 inline 8-bit counters): 351429 [0x290e30c, 0x2963fd1), 00:06:48.118 INFO: Loaded 1 PC tables (351429 PCs): 351429 [0x2963fd8,0x2ec0c28), 00:06:48.118 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:48.118 INFO: A corpus is not provided, starting from an empty corpus 00:06:48.118 #2 INITED exec/s: 0 rss: 63Mb 00:06:48.118 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:48.118 This may also happen if the target rejected all inputs we tried so far 00:06:48.118 [2024-05-14 11:43:15.065574] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.118 [2024-05-14 11:43:15.065607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.118 [2024-05-14 11:43:15.065655] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.118 [2024-05-14 11:43:15.065672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.377 NEW_FUNC[1/687]: 0x495df0 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:06:48.377 NEW_FUNC[2/687]: 0x4b72b0 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:06:48.377 #3 NEW cov: 11803 ft: 11809 corp: 2/18b lim: 35 exec/s: 0 rss: 70Mb L: 17/17 MS: 1 InsertRepeatedBytes- 00:06:48.377 [2024-05-14 11:43:15.416378] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.377 [2024-05-14 11:43:15.416429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.377 [2024-05-14 11:43:15.416479] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.377 [2024-05-14 11:43:15.416495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID 
NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.636 #9 NEW cov: 11938 ft: 12230 corp: 3/35b lim: 35 exec/s: 0 rss: 70Mb L: 17/17 MS: 1 ChangeBinInt- 00:06:48.636 [2024-05-14 11:43:15.486447] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.636 [2024-05-14 11:43:15.486479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.636 #10 NEW cov: 11944 ft: 13179 corp: 4/45b lim: 35 exec/s: 0 rss: 70Mb L: 10/17 MS: 1 EraseBytes- 00:06:48.636 [2024-05-14 11:43:15.556649] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.636 [2024-05-14 11:43:15.556679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.636 [2024-05-14 11:43:15.556727] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.636 [2024-05-14 11:43:15.556744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.636 #11 NEW cov: 12029 ft: 13472 corp: 5/62b lim: 35 exec/s: 0 rss: 70Mb L: 17/17 MS: 1 CrossOver- 00:06:48.636 [2024-05-14 11:43:15.606811] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.636 [2024-05-14 11:43:15.606843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.636 [2024-05-14 11:43:15.606876] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.636 [2024-05-14 11:43:15.606893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.636 #12 NEW cov: 12029 ft: 13729 corp: 6/79b lim: 35 exec/s: 0 rss: 70Mb L: 17/17 MS: 1 ChangeByte- 00:06:48.636 NEW_FUNC[1/2]: 0x4b0780 in feat_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:273 00:06:48.636 NEW_FUNC[2/2]: 0x1190ef0 in nvmf_ctrlr_set_features_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1597 00:06:48.636 #14 NEW cov: 12086 ft: 13946 corp: 7/88b lim: 35 exec/s: 0 rss: 70Mb L: 9/17 MS: 2 ShuffleBytes-CMP- DE: "\001\000\000\000\000\000\000\000"- 00:06:48.895 [2024-05-14 11:43:15.727073] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.895 [2024-05-14 11:43:15.727106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.895 #17 NEW cov: 12086 ft: 14087 corp: 8/98b lim: 35 exec/s: 0 rss: 70Mb L: 10/17 MS: 3 ShuffleBytes-InsertByte-CMP- DE: ";\226y\014Z=\205\000"- 00:06:48.895 [2024-05-14 11:43:15.777330] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.895 [2024-05-14 11:43:15.777361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.895 [2024-05-14 11:43:15.777415] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.895 [2024-05-14 11:43:15.777432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.895 [2024-05-14 11:43:15.777465] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:6 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.895 [2024-05-14 11:43:15.777481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.895 [2024-05-14 11:43:15.777511] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.895 [2024-05-14 11:43:15.777527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.895 #18 NEW cov: 12086 ft: 14425 corp: 9/132b lim: 35 exec/s: 0 rss: 71Mb L: 34/34 MS: 1 CopyPart- 00:06:48.895 [2024-05-14 11:43:15.847501] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.895 [2024-05-14 11:43:15.847532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.895 [2024-05-14 11:43:15.847565] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES AUTONOMOUS POWER STATE TRANSITION cid:5 cdw10:0000000c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.895 [2024-05-14 11:43:15.847581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.895 [2024-05-14 11:43:15.847612] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:8000008c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.895 [2024-05-14 11:43:15.847628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:48.895 #19 NEW cov: 12093 ft: 14648 corp: 10/157b lim: 35 exec/s: 0 rss: 71Mb L: 25/34 MS: 1 PersAutoDict- DE: ";\226y\014Z=\205\000"- 00:06:48.895 [2024-05-14 11:43:15.907794] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.896 [2024-05-14 11:43:15.907825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.896 [2024-05-14 11:43:15.907860] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:8000008c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.896 [2024-05-14 11:43:15.907877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:48.896 [2024-05-14 11:43:15.907907] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.896 [2024-05-14 11:43:15.907922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:06:48.896 [2024-05-14 11:43:15.907951] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:8000008c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.896 [2024-05-14 11:43:15.907967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:48.896 [2024-05-14 11:43:15.907996] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.896 [2024-05-14 11:43:15.908012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:48.896 NEW_FUNC[1/1]: 0x19feca0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:48.896 #20 NEW cov: 12110 ft: 14811 corp: 11/192b lim: 35 exec/s: 0 rss: 71Mb L: 35/35 MS: 1 InsertByte- 00:06:48.896 [2024-05-14 11:43:15.977791] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.896 [2024-05-14 11:43:15.977822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:48.896 [2024-05-14 11:43:15.977859] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:8000008c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:48.896 [2024-05-14 11:43:15.977876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.155 #21 NEW cov: 12110 ft: 14827 corp: 12/210b lim: 35 exec/s: 0 rss: 71Mb L: 18/35 MS: 1 InsertByte- 00:06:49.155 [2024-05-14 11:43:16.028103] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.155 [2024-05-14 11:43:16.028133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.155 [2024-05-14 11:43:16.028167] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:8000008c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.155 [2024-05-14 11:43:16.028184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.155 [2024-05-14 11:43:16.028213] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.155 [2024-05-14 11:43:16.028229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.155 [2024-05-14 11:43:16.028258] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:8000008c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.155 [2024-05-14 11:43:16.028274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.155 [2024-05-14 11:43:16.028303] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.155 [2024-05-14 11:43:16.028319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:06:49.155 #22 NEW cov: 12110 ft: 14892 corp: 13/245b lim: 35 exec/s: 22 rss: 71Mb L: 35/35 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:06:49.155 #23 NEW cov: 12110 ft: 14906 corp: 14/254b lim: 35 exec/s: 23 rss: 71Mb L: 9/35 MS: 1 ChangeByte- 00:06:49.155 [2024-05-14 11:43:16.168191] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.155 [2024-05-14 11:43:16.168220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.155 [2024-05-14 11:43:16.168268] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.155 [2024-05-14 11:43:16.168284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.155 #24 NEW cov: 12110 ft: 14918 corp: 15/271b lim: 35 exec/s: 24 rss: 71Mb L: 17/35 MS: 1 ShuffleBytes- 00:06:49.155 [2024-05-14 11:43:16.218550] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.155 [2024-05-14 11:43:16.218581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.155 [2024-05-14 11:43:16.218614] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:8000008c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.155 [2024-05-14 11:43:16.218630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.155 [2024-05-14 11:43:16.218660] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.155 [2024-05-14 11:43:16.218675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.155 [2024-05-14 11:43:16.218709] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:8000008c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.155 [2024-05-14 11:43:16.218725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.155 [2024-05-14 11:43:16.218754] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.155 [2024-05-14 11:43:16.218769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:49.414 #25 NEW cov: 12110 ft: 14945 corp: 16/306b lim: 35 exec/s: 25 rss: 71Mb L: 35/35 MS: 1 ChangeBit- 00:06:49.414 [2024-05-14 11:43:16.268504] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.414 [2024-05-14 11:43:16.268533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.414 [2024-05-14 11:43:16.268582] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.414 [2024-05-14 11:43:16.268597] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.414 #26 NEW cov: 12110 ft: 14964 corp: 17/323b lim: 35 exec/s: 26 rss: 71Mb L: 17/35 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\000"- 00:06:49.414 [2024-05-14 11:43:16.338718] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.414 [2024-05-14 11:43:16.338748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.414 [2024-05-14 11:43:16.338781] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.414 [2024-05-14 11:43:16.338797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.414 #27 NEW cov: 12110 ft: 15047 corp: 18/340b lim: 35 exec/s: 27 rss: 71Mb L: 17/35 MS: 1 ChangeByte- 00:06:49.414 [2024-05-14 11:43:16.388991] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.414 [2024-05-14 11:43:16.389020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.414 [2024-05-14 11:43:16.389068] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:8000008c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.414 [2024-05-14 11:43:16.389084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.414 [2024-05-14 11:43:16.389114] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.414 [2024-05-14 11:43:16.389129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.414 [2024-05-14 11:43:16.389158] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:8000008c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.414 [2024-05-14 11:43:16.389174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.414 [2024-05-14 11:43:16.389203] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.414 [2024-05-14 11:43:16.389218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:49.414 #28 NEW cov: 12110 ft: 15053 corp: 19/375b lim: 35 exec/s: 28 rss: 71Mb L: 35/35 MS: 1 ChangeBinInt- 00:06:49.414 [2024-05-14 11:43:16.438948] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.414 [2024-05-14 11:43:16.438982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.414 [2024-05-14 11:43:16.439016] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.414 [2024-05-14 
11:43:16.439032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.414 #29 NEW cov: 12110 ft: 15064 corp: 20/393b lim: 35 exec/s: 29 rss: 71Mb L: 18/35 MS: 1 CopyPart- 00:06:49.673 [2024-05-14 11:43:16.509104] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.673 [2024-05-14 11:43:16.509134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.673 #30 NEW cov: 12110 ft: 15086 corp: 21/406b lim: 35 exec/s: 30 rss: 71Mb L: 13/35 MS: 1 EraseBytes- 00:06:49.673 [2024-05-14 11:43:16.559194] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.673 [2024-05-14 11:43:16.559223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.673 #31 NEW cov: 12110 ft: 15116 corp: 22/416b lim: 35 exec/s: 31 rss: 71Mb L: 10/35 MS: 1 ShuffleBytes- 00:06:49.673 [2024-05-14 11:43:16.610316] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.673 [2024-05-14 11:43:16.610346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.673 [2024-05-14 11:43:16.610421] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES AUTONOMOUS POWER STATE TRANSITION cid:5 cdw10:0000000c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.673 [2024-05-14 11:43:16.610436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.673 [2024-05-14 11:43:16.610498] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:8000008c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.673 [2024-05-14 11:43:16.610513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.673 #32 NEW cov: 12110 ft: 15203 corp: 23/441b lim: 35 exec/s: 32 rss: 71Mb L: 25/35 MS: 1 ChangeBit- 00:06:49.673 #33 NEW cov: 12110 ft: 15301 corp: 24/450b lim: 35 exec/s: 33 rss: 71Mb L: 9/35 MS: 1 ChangeBinInt- 00:06:49.673 [2024-05-14 11:43:16.700249] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.673 [2024-05-14 11:43:16.700276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.673 #34 NEW cov: 12110 ft: 15314 corp: 25/457b lim: 35 exec/s: 34 rss: 71Mb L: 7/35 MS: 1 EraseBytes- 00:06:49.673 [2024-05-14 11:43:16.750583] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.673 [2024-05-14 11:43:16.750613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.673 [2024-05-14 11:43:16.750673] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.673 [2024-05-14 
11:43:16.750690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.932 #35 NEW cov: 12110 ft: 15371 corp: 26/475b lim: 35 exec/s: 35 rss: 71Mb L: 18/35 MS: 1 InsertByte- 00:06:49.932 [2024-05-14 11:43:16.810695] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000003b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.932 [2024-05-14 11:43:16.810729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.932 [2024-05-14 11:43:16.810786] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000096 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.932 [2024-05-14 11:43:16.810800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.932 #36 NEW cov: 12110 ft: 15383 corp: 27/491b lim: 35 exec/s: 36 rss: 71Mb L: 16/35 MS: 1 CopyPart- 00:06:49.932 [2024-05-14 11:43:16.851279] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.932 [2024-05-14 11:43:16.851308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.932 [2024-05-14 11:43:16.851368] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:8000008c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.932 [2024-05-14 11:43:16.851388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.932 [2024-05-14 11:43:16.851445] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.932 [2024-05-14 11:43:16.851458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:49.932 [2024-05-14 11:43:16.851516] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:8000008c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.932 [2024-05-14 11:43:16.851532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:49.932 [2024-05-14 11:43:16.851591] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.932 [2024-05-14 11:43:16.851607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:49.932 #37 NEW cov: 12110 ft: 15415 corp: 28/526b lim: 35 exec/s: 37 rss: 71Mb L: 35/35 MS: 1 ChangeBinInt- 00:06:49.932 #43 NEW cov: 12110 ft: 15461 corp: 29/535b lim: 35 exec/s: 43 rss: 71Mb L: 9/35 MS: 1 ShuffleBytes- 00:06:49.932 [2024-05-14 11:43:16.941026] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.932 [2024-05-14 11:43:16.941053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:49.932 [2024-05-14 11:43:16.941129] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 
cdw10:80000084 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:49.932 [2024-05-14 11:43:16.941146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:49.932 #44 NEW cov: 12117 ft: 15496 corp: 30/552b lim: 35 exec/s: 44 rss: 71Mb L: 17/35 MS: 1 ChangeBinInt- 00:06:49.932 #45 NEW cov: 12117 ft: 15521 corp: 31/562b lim: 35 exec/s: 45 rss: 71Mb L: 10/35 MS: 1 InsertByte- 00:06:50.191 [2024-05-14 11:43:17.031277] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.191 [2024-05-14 11:43:17.031304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.191 [2024-05-14 11:43:17.031378] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.191 [2024-05-14 11:43:17.031398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.191 #46 NEW cov: 12117 ft: 15553 corp: 32/579b lim: 35 exec/s: 46 rss: 71Mb L: 17/35 MS: 1 ChangeBit- 00:06:50.191 [2024-05-14 11:43:17.071439] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.191 [2024-05-14 11:43:17.071467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.191 [2024-05-14 11:43:17.071526] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000c4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:50.191 [2024-05-14 11:43:17.071542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.191 #47 NEW cov: 12117 ft: 15556 corp: 33/596b lim: 35 exec/s: 23 rss: 71Mb L: 17/35 MS: 1 ChangeBit- 00:06:50.192 #47 DONE cov: 12117 ft: 15556 corp: 33/596b lim: 35 exec/s: 23 rss: 71Mb 00:06:50.192 ###### Recommended dictionary. ###### 00:06:50.192 "\001\000\000\000\000\000\000\000" # Uses: 0 00:06:50.192 ";\226y\014Z=\205\000" # Uses: 1 00:06:50.192 "\377\377\377\377\377\377\377\000" # Uses: 0 00:06:50.192 ###### End of recommended dictionary. 
###### 00:06:50.192 Done 47 runs in 2 second(s) 00:06:50.192 [2024-05-14 11:43:17.100075] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:50.192 11:43:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:06:50.192 11:43:17 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:50.192 11:43:17 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:50.192 11:43:17 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:06:50.192 11:43:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:06:50.192 11:43:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:50.192 11:43:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:50.192 11:43:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:50.192 11:43:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:06:50.192 11:43:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:50.192 11:43:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:50.192 11:43:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:06:50.192 11:43:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4415 00:06:50.192 11:43:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:50.192 11:43:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:06:50.192 11:43:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:50.192 11:43:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:50.192 11:43:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:50.192 11:43:17 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:06:50.192 [2024-05-14 11:43:17.269394] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
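Alongside the port rewrite, the run.sh trace above also provisions LeakSanitizer suppressions before each fuzzer run (run.sh@28, @32, @41-@42). A small bash sketch of those steps follows; the redirections into the suppression file are assumed (the xtrace only records the echo commands themselves), and the export is added here for illustration where the traced script declares the variable with `local`.

# Sketch of the LSAN suppression setup, under the assumptions stated above.
suppress_file=/var/tmp/suppress_nvmf_fuzz
echo "leak:spdk_nvmf_qpair_disconnect" > "$suppress_file"
echo "leak:nvmf_ctrlr_create" >> "$suppress_file"
# Matches the value set at run.sh@32: leaks whose stacks contain either symbol
# are suppressed, report_objects=1 keeps per-object detail for anything still
# reported, and print_suppressions=0 omits the suppression summary.
export LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0"
# run.sh@54 removes both this file and the per-run config once the run finishes,
# as seen at the start of this block for the previous fuzzer.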
00:06:50.192 [2024-05-14 11:43:17.269462] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3639819 ] 00:06:50.450 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.450 [2024-05-14 11:43:17.466288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.450 [2024-05-14 11:43:17.532133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.709 [2024-05-14 11:43:17.591077] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:50.709 [2024-05-14 11:43:17.607033] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:50.709 [2024-05-14 11:43:17.607428] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:06:50.709 INFO: Running with entropic power schedule (0xFF, 100). 00:06:50.709 INFO: Seed: 2511667435 00:06:50.709 INFO: Loaded 1 modules (351429 inline 8-bit counters): 351429 [0x290e30c, 0x2963fd1), 00:06:50.709 INFO: Loaded 1 PC tables (351429 PCs): 351429 [0x2963fd8,0x2ec0c28), 00:06:50.709 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:50.709 INFO: A corpus is not provided, starting from an empty corpus 00:06:50.709 #2 INITED exec/s: 0 rss: 63Mb 00:06:50.709 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:50.709 This may also happen if the target rejected all inputs we tried so far 00:06:50.967 NEW_FUNC[1/671]: 0x497330 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:06:50.967 NEW_FUNC[2/671]: 0x4b72b0 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:06:50.967 #19 NEW cov: 11670 ft: 11672 corp: 2/12b lim: 35 exec/s: 0 rss: 70Mb L: 11/11 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:50.967 [2024-05-14 11:43:17.986745] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000031 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.967 [2024-05-14 11:43:17.986779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:50.967 [2024-05-14 11:43:17.986853] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:50.967 [2024-05-14 11:43:17.986867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:50.967 NEW_FUNC[1/15]: 0xef73d0 in rte_get_timer_cycles /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/include/generic/rte_cycles.h:94 00:06:50.967 NEW_FUNC[2/15]: 0x1719e50 in spdk_nvme_print_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:263 00:06:50.967 #22 NEW cov: 11930 ft: 12451 corp: 3/27b lim: 35 exec/s: 0 rss: 70Mb L: 15/15 MS: 3 CMP-ChangeByte-InsertRepeatedBytes- DE: "\014\000\000\000"- 00:06:50.967 #23 NEW cov: 11936 ft: 12743 corp: 4/39b lim: 35 exec/s: 0 rss: 70Mb L: 12/15 MS: 1 InsertByte- 00:06:51.226 [2024-05-14 11:43:18.066752] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000ba SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.226 [2024-05-14 11:43:18.066778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.226 #33 NEW cov: 12021 ft: 13222 corp: 5/46b lim: 35 exec/s: 0 rss: 70Mb L: 7/15 MS: 5 CopyPart-ChangeByte-ShuffleBytes-PersAutoDict-InsertByte- DE: "\014\000\000\000"- 00:06:51.226 #34 NEW cov: 12021 ft: 13286 corp: 6/58b lim: 35 exec/s: 0 rss: 70Mb L: 12/15 MS: 1 ChangeBinInt- 00:06:51.226 [2024-05-14 11:43:18.147121] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000031 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.226 [2024-05-14 11:43:18.147146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.226 [2024-05-14 11:43:18.147205] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000fc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.226 [2024-05-14 11:43:18.147219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.226 #35 NEW cov: 12021 ft: 13364 corp: 7/73b lim: 35 exec/s: 0 rss: 70Mb L: 15/15 MS: 1 ChangeBinInt- 00:06:51.226 NEW_FUNC[1/1]: 0x4b1af0 in feat_power_management /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:282 00:06:51.226 #36 NEW cov: 12044 ft: 13705 corp: 8/95b lim: 35 exec/s: 0 rss: 70Mb L: 22/22 MS: 1 InsertRepeatedBytes- 00:06:51.226 [2024-05-14 11:43:18.227251] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES AUTONOMOUS POWER STATE TRANSITION cid:4 cdw10:0000000c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.226 [2024-05-14 11:43:18.227277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.226 #46 NEW cov: 12044 ft: 13749 corp: 9/104b lim: 35 exec/s: 0 rss: 70Mb L: 9/22 MS: 5 PersAutoDict-InsertByte-EraseBytes-InsertByte-PersAutoDict- DE: "\014\000\000\000"-"\014\000\000\000"- 00:06:51.226 [2024-05-14 11:43:18.257652] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000005bb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.226 [2024-05-14 11:43:18.257678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.226 #47 NEW cov: 12044 ft: 13824 corp: 10/131b lim: 35 exec/s: 0 rss: 70Mb L: 27/27 MS: 1 InsertRepeatedBytes- 00:06:51.226 [2024-05-14 11:43:18.307481] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES AUTONOMOUS POWER STATE TRANSITION cid:4 cdw10:0000000c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.226 [2024-05-14 11:43:18.307507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.486 #48 NEW cov: 12044 ft: 13887 corp: 11/141b lim: 35 exec/s: 0 rss: 70Mb L: 10/27 MS: 1 InsertByte- 00:06:51.486 #49 NEW cov: 12044 ft: 13941 corp: 12/153b lim: 35 exec/s: 0 rss: 70Mb L: 12/27 MS: 1 CMP- DE: "\000\000"- 00:06:51.486 [2024-05-14 11:43:18.387769] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000031 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.486 [2024-05-14 11:43:18.387794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.486 [2024-05-14 11:43:18.387853] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000fc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.486 [2024-05-14 11:43:18.387867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.486 #50 NEW cov: 12044 ft: 13986 corp: 13/168b lim: 35 exec/s: 0 rss: 71Mb L: 15/27 MS: 1 ShuffleBytes- 00:06:51.486 [2024-05-14 11:43:18.427884] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000031 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.486 [2024-05-14 11:43:18.427909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.486 [2024-05-14 11:43:18.427967] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.486 [2024-05-14 11:43:18.427981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.486 #51 NEW cov: 12044 ft: 14009 corp: 14/185b lim: 35 exec/s: 0 rss: 71Mb L: 17/27 MS: 1 CMP- DE: "\012\000"- 00:06:51.486 #52 NEW cov: 12044 ft: 14053 corp: 15/197b lim: 35 exec/s: 0 rss: 71Mb L: 12/27 MS: 1 ChangeBinInt- 00:06:51.486 [2024-05-14 11:43:18.508126] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000031 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.486 [2024-05-14 11:43:18.508152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.486 [2024-05-14 11:43:18.508211] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.486 [2024-05-14 11:43:18.508225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.486 #53 NEW cov: 12044 ft: 14068 corp: 16/212b lim: 35 exec/s: 0 rss: 71Mb L: 15/27 MS: 1 ChangeBinInt- 00:06:51.486 [2024-05-14 11:43:18.548458] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.486 [2024-05-14 11:43:18.548483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.745 NEW_FUNC[1/1]: 0x19feca0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:51.745 #54 NEW cov: 12067 ft: 14136 corp: 17/239b lim: 35 exec/s: 0 rss: 71Mb L: 27/27 MS: 1 PersAutoDict- DE: "\000\000"- 00:06:51.745 [2024-05-14 11:43:18.598525] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000031 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.745 [2024-05-14 11:43:18.598551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.745 [2024-05-14 11:43:18.598629] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.745 [2024-05-14 11:43:18.598643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:06:51.745 [2024-05-14 11:43:18.598704] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007fc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.746 [2024-05-14 11:43:18.598717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.746 #55 NEW cov: 12067 ft: 14256 corp: 18/262b lim: 35 exec/s: 0 rss: 71Mb L: 23/27 MS: 1 CopyPart- 00:06:51.746 [2024-05-14 11:43:18.638352] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES AUTONOMOUS POWER STATE TRANSITION cid:4 cdw10:0000000c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.746 [2024-05-14 11:43:18.638378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.746 #56 NEW cov: 12067 ft: 14300 corp: 19/273b lim: 35 exec/s: 56 rss: 71Mb L: 11/27 MS: 1 CMP- DE: "\011\000"- 00:06:51.746 [2024-05-14 11:43:18.678635] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000378 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.746 [2024-05-14 11:43:18.678661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.746 #57 NEW cov: 12067 ft: 14309 corp: 20/287b lim: 35 exec/s: 57 rss: 71Mb L: 14/27 MS: 1 PersAutoDict- DE: "\011\000"- 00:06:51.746 [2024-05-14 11:43:18.718969] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000031 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.746 [2024-05-14 11:43:18.718994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.746 [2024-05-14 11:43:18.719054] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.746 [2024-05-14 11:43:18.719067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.746 [2024-05-14 11:43:18.719125] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000013c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.746 [2024-05-14 11:43:18.719138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.746 [2024-05-14 11:43:18.719198] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000013c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.746 [2024-05-14 11:43:18.719211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:51.746 #58 NEW cov: 12067 ft: 14751 corp: 21/318b lim: 35 exec/s: 58 rss: 71Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:06:51.746 [2024-05-14 11:43:18.758982] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000031 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.746 [2024-05-14 11:43:18.759010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:51.746 [2024-05-14 11:43:18.759070] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.746 [2024-05-14 11:43:18.759083] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:51.746 [2024-05-14 11:43:18.759160] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000000fc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.746 [2024-05-14 11:43:18.759175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:51.746 #59 NEW cov: 12067 ft: 14772 corp: 22/341b lim: 35 exec/s: 59 rss: 71Mb L: 23/31 MS: 1 ChangeBinInt- 00:06:51.746 #60 NEW cov: 12067 ft: 14792 corp: 23/348b lim: 35 exec/s: 60 rss: 71Mb L: 7/31 MS: 1 EraseBytes- 00:06:52.005 NEW_FUNC[1/1]: 0x4b5c80 in feat_interrupt_coalescing /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:325 00:06:52.005 #61 NEW cov: 12089 ft: 14825 corp: 24/360b lim: 35 exec/s: 61 rss: 71Mb L: 12/31 MS: 1 ChangeBit- 00:06:52.005 #62 NEW cov: 12089 ft: 14837 corp: 25/382b lim: 35 exec/s: 62 rss: 71Mb L: 22/31 MS: 1 ChangeBit- 00:06:52.005 #63 NEW cov: 12089 ft: 14860 corp: 26/389b lim: 35 exec/s: 63 rss: 72Mb L: 7/31 MS: 1 CopyPart- 00:06:52.005 #64 NEW cov: 12089 ft: 14870 corp: 27/401b lim: 35 exec/s: 64 rss: 72Mb L: 12/31 MS: 1 PersAutoDict- DE: "\011\000"- 00:06:52.005 #65 NEW cov: 12089 ft: 14908 corp: 28/413b lim: 35 exec/s: 65 rss: 72Mb L: 12/31 MS: 1 ChangeByte- 00:06:52.005 [2024-05-14 11:43:19.049743] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000031 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.005 [2024-05-14 11:43:19.049769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.005 [2024-05-14 11:43:19.049828] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.005 [2024-05-14 11:43:19.049842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.005 #66 NEW cov: 12089 ft: 14927 corp: 29/428b lim: 35 exec/s: 66 rss: 72Mb L: 15/31 MS: 1 ShuffleBytes- 00:06:52.005 [2024-05-14 11:43:19.090086] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000ba SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.005 [2024-05-14 11:43:19.090112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.005 [2024-05-14 11:43:19.090178] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.005 [2024-05-14 11:43:19.090192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.005 [2024-05-14 11:43:19.090255] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.005 [2024-05-14 11:43:19.090269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.005 [2024-05-14 11:43:19.090332] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.005 [2024-05-14 11:43:19.090346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.265 #72 NEW cov: 12089 ft: 14963 corp: 30/460b lim: 35 exec/s: 72 rss: 72Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:06:52.265 [2024-05-14 11:43:19.139985] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000031 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.265 [2024-05-14 11:43:19.140011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.265 [2024-05-14 11:43:19.140074] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.265 [2024-05-14 11:43:19.140089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.265 #73 NEW cov: 12089 ft: 14971 corp: 31/475b lim: 35 exec/s: 73 rss: 72Mb L: 15/32 MS: 1 ChangeBinInt- 00:06:52.265 [2024-05-14 11:43:19.180201] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000031 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.265 [2024-05-14 11:43:19.180226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.265 [2024-05-14 11:43:19.180290] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.265 [2024-05-14 11:43:19.180304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.265 [2024-05-14 11:43:19.180366] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000072c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.265 [2024-05-14 11:43:19.180383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.265 #74 NEW cov: 12089 ft: 14981 corp: 32/498b lim: 35 exec/s: 74 rss: 72Mb L: 23/32 MS: 1 ChangeByte- 00:06:52.265 [2024-05-14 11:43:19.220189] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000031 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.265 [2024-05-14 11:43:19.220214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.265 [2024-05-14 11:43:19.220276] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000fc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.265 [2024-05-14 11:43:19.220289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.265 #75 NEW cov: 12089 ft: 14986 corp: 33/515b lim: 35 exec/s: 75 rss: 72Mb L: 17/32 MS: 1 ShuffleBytes- 00:06:52.265 [2024-05-14 11:43:19.260274] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000031 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.265 [2024-05-14 11:43:19.260300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.265 [2024-05-14 11:43:19.260375] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000fc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.265 [2024-05-14 11:43:19.260394] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.265 #76 NEW cov: 12089 ft: 15019 corp: 34/532b lim: 35 exec/s: 76 rss: 72Mb L: 17/32 MS: 1 PersAutoDict- DE: "\011\000"- 00:06:52.265 #80 NEW cov: 12089 ft: 15034 corp: 35/540b lim: 35 exec/s: 80 rss: 72Mb L: 8/32 MS: 4 EraseBytes-ShuffleBytes-ChangeBit-PersAutoDict- DE: "\011\000"- 00:06:52.265 [2024-05-14 11:43:19.340901] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000031 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.265 [2024-05-14 11:43:19.340927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.265 [2024-05-14 11:43:19.340988] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.265 [2024-05-14 11:43:19.341002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.265 [2024-05-14 11:43:19.341062] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000007e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.265 [2024-05-14 11:43:19.341076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.265 [2024-05-14 11:43:19.341139] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000013c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.265 [2024-05-14 11:43:19.341152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:52.265 [2024-05-14 11:43:19.341210] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:0000013c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.265 [2024-05-14 11:43:19.341224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:52.524 #81 NEW cov: 12089 ft: 15068 corp: 36/575b lim: 35 exec/s: 81 rss: 72Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:06:52.524 [2024-05-14 11:43:19.390559] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.524 [2024-05-14 11:43:19.390585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.524 #85 NEW cov: 12089 ft: 15087 corp: 37/587b lim: 35 exec/s: 85 rss: 72Mb L: 12/35 MS: 4 EraseBytes-CMP-EraseBytes-CMP- DE: "\377\001"-"\001\000\000\000\000\000\000\000"- 00:06:52.524 [2024-05-14 11:43:19.430795] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000031 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.524 [2024-05-14 11:43:19.430820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.524 [2024-05-14 11:43:19.430879] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000fc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.524 [2024-05-14 11:43:19.430892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.524 #86 NEW cov: 12089 ft: 15113 corp: 38/604b lim: 35 exec/s: 86 
rss: 73Mb L: 17/35 MS: 1 CopyPart- 00:06:52.524 [2024-05-14 11:43:19.470876] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.524 [2024-05-14 11:43:19.470901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.524 NEW_FUNC[1/1]: 0x4bb240 in feat_keep_alive_timer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:364 00:06:52.524 #87 NEW cov: 12108 ft: 15142 corp: 39/619b lim: 35 exec/s: 87 rss: 73Mb L: 15/35 MS: 1 CrossOver- 00:06:52.524 [2024-05-14 11:43:19.511130] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000031 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.524 [2024-05-14 11:43:19.511154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.524 [2024-05-14 11:43:19.511212] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000fc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.524 [2024-05-14 11:43:19.511226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.524 #93 NEW cov: 12108 ft: 15152 corp: 40/646b lim: 35 exec/s: 93 rss: 73Mb L: 27/35 MS: 1 CopyPart- 00:06:52.524 #94 NEW cov: 12108 ft: 15194 corp: 41/657b lim: 35 exec/s: 94 rss: 73Mb L: 11/35 MS: 1 ChangeByte- 00:06:52.524 [2024-05-14 11:43:19.591331] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000031 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.524 [2024-05-14 11:43:19.591356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.524 [2024-05-14 11:43:19.591418] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007fc SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.524 [2024-05-14 11:43:19.591432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.524 [2024-05-14 11:43:19.591509] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000072c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.524 [2024-05-14 11:43:19.591523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.783 #95 NEW cov: 12108 ft: 15205 corp: 42/680b lim: 35 exec/s: 95 rss: 73Mb L: 23/35 MS: 1 ChangeBit- 00:06:52.783 [2024-05-14 11:43:19.641509] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000031 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.783 [2024-05-14 11:43:19.641534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:52.783 [2024-05-14 11:43:19.641612] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000078 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.783 [2024-05-14 11:43:19.641627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:52.783 [2024-05-14 11:43:19.641690] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 
cdw10:00000796 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.783 [2024-05-14 11:43:19.641703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:52.783 #96 NEW cov: 12108 ft: 15215 corp: 43/703b lim: 35 exec/s: 48 rss: 73Mb L: 23/35 MS: 1 CrossOver- 00:06:52.783 #96 DONE cov: 12108 ft: 15215 corp: 43/703b lim: 35 exec/s: 48 rss: 73Mb 00:06:52.783 ###### Recommended dictionary. ###### 00:06:52.783 "\014\000\000\000" # Uses: 4 00:06:52.783 "\000\000" # Uses: 1 00:06:52.783 "\012\000" # Uses: 0 00:06:52.783 "\011\000" # Uses: 4 00:06:52.783 "\377\001" # Uses: 0 00:06:52.783 "\001\000\000\000\000\000\000\000" # Uses: 0 00:06:52.783 ###### End of recommended dictionary. ###### 00:06:52.783 Done 96 runs in 2 second(s) 00:06:52.783 [2024-05-14 11:43:19.662530] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:52.783 11:43:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:06:52.783 11:43:19 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:52.783 11:43:19 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:52.783 11:43:19 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:06:52.783 11:43:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:06:52.783 11:43:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:52.783 11:43:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:52.783 11:43:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:52.783 11:43:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:06:52.783 11:43:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:52.783 11:43:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:52.783 11:43:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:06:52.784 11:43:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4416 00:06:52.784 11:43:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:52.784 11:43:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:06:52.784 11:43:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:52.784 11:43:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:52.784 11:43:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:52.784 11:43:19 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:06:52.784 [2024-05-14 11:43:19.827802] Starting SPDK 
v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:06:52.784 [2024-05-14 11:43:19.827874] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3640126 ] 00:06:52.784 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.042 [2024-05-14 11:43:20.013563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.042 [2024-05-14 11:43:20.088434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.301 [2024-05-14 11:43:20.148008] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.301 [2024-05-14 11:43:20.163964] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:53.301 [2024-05-14 11:43:20.164371] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:06:53.301 INFO: Running with entropic power schedule (0xFF, 100). 00:06:53.301 INFO: Seed: 774733743 00:06:53.301 INFO: Loaded 1 modules (351429 inline 8-bit counters): 351429 [0x290e30c, 0x2963fd1), 00:06:53.301 INFO: Loaded 1 PC tables (351429 PCs): 351429 [0x2963fd8,0x2ec0c28), 00:06:53.301 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:53.301 INFO: A corpus is not provided, starting from an empty corpus 00:06:53.301 #2 INITED exec/s: 0 rss: 63Mb 00:06:53.301 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:53.301 This may also happen if the target rejected all inputs we tried so far 00:06:53.301 [2024-05-14 11:43:20.229564] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12370169555311111083 len:43948 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.301 [2024-05-14 11:43:20.229595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.301 [2024-05-14 11:43:20.229640] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12370169555311111083 len:43948 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.301 [2024-05-14 11:43:20.229658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.559 NEW_FUNC[1/686]: 0x4987e0 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:06:53.559 NEW_FUNC[2/686]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:53.559 #3 NEW cov: 11890 ft: 11891 corp: 2/58b lim: 105 exec/s: 0 rss: 70Mb L: 57/57 MS: 1 InsertRepeatedBytes- 00:06:53.559 [2024-05-14 11:43:20.570558] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.559 [2024-05-14 11:43:20.570601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.559 [2024-05-14 11:43:20.570666] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:53.559 [2024-05-14 11:43:20.570686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.559 [2024-05-14 11:43:20.570747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.559 [2024-05-14 11:43:20.570767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.559 #4 NEW cov: 12020 ft: 12722 corp: 3/123b lim: 105 exec/s: 0 rss: 71Mb L: 65/65 MS: 1 InsertRepeatedBytes- 00:06:53.559 [2024-05-14 11:43:20.610417] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.559 [2024-05-14 11:43:20.610446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.559 [2024-05-14 11:43:20.610480] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.559 [2024-05-14 11:43:20.610497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.559 #5 NEW cov: 12026 ft: 12870 corp: 4/183b lim: 105 exec/s: 0 rss: 71Mb L: 60/65 MS: 1 CrossOver- 00:06:53.818 [2024-05-14 11:43:20.650533] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.818 [2024-05-14 11:43:20.650561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.819 [2024-05-14 11:43:20.650594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.819 [2024-05-14 11:43:20.650611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.819 #6 NEW cov: 12111 ft: 13207 corp: 5/243b lim: 105 exec/s: 0 rss: 71Mb L: 60/65 MS: 1 CMP- DE: "\000\003"- 00:06:53.819 [2024-05-14 11:43:20.700741] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.819 [2024-05-14 11:43:20.700770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.819 [2024-05-14 11:43:20.700805] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744071253131263 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.819 [2024-05-14 11:43:20.700820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.819 [2024-05-14 11:43:20.700873] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.819 [2024-05-14 11:43:20.700889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.819 #7 NEW cov: 12111 
ft: 13295 corp: 6/316b lim: 105 exec/s: 0 rss: 71Mb L: 73/73 MS: 1 CMP- DE: "\377\377~(X\000m\225"- 00:06:53.819 [2024-05-14 11:43:20.740985] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.819 [2024-05-14 11:43:20.741013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.819 [2024-05-14 11:43:20.741075] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.819 [2024-05-14 11:43:20.741091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.819 [2024-05-14 11:43:20.741145] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.819 [2024-05-14 11:43:20.741162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.819 [2024-05-14 11:43:20.741217] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.819 [2024-05-14 11:43:20.741232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.819 #8 NEW cov: 12111 ft: 13809 corp: 7/400b lim: 105 exec/s: 0 rss: 71Mb L: 84/84 MS: 1 InsertRepeatedBytes- 00:06:53.819 [2024-05-14 11:43:20.780835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.819 [2024-05-14 11:43:20.780863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.819 [2024-05-14 11:43:20.780911] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.819 [2024-05-14 11:43:20.780927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.819 #9 NEW cov: 12111 ft: 13927 corp: 8/461b lim: 105 exec/s: 0 rss: 71Mb L: 61/84 MS: 1 InsertByte- 00:06:53.819 [2024-05-14 11:43:20.831010] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.819 [2024-05-14 11:43:20.831039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.819 [2024-05-14 11:43:20.831075] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.819 [2024-05-14 11:43:20.831092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.819 #10 NEW cov: 12111 ft: 14028 corp: 9/521b lim: 105 exec/s: 0 rss: 71Mb L: 60/84 MS: 1 ChangeByte- 00:06:53.819 [2024-05-14 11:43:20.871109] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.819 [2024-05-14 11:43:20.871137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.819 [2024-05-14 11:43:20.871188] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:53.819 [2024-05-14 11:43:20.871204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.819 #11 NEW cov: 12111 ft: 14061 corp: 10/582b lim: 105 exec/s: 0 rss: 71Mb L: 61/84 MS: 1 ShuffleBytes- 00:06:54.079 [2024-05-14 11:43:20.911345] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.079 [2024-05-14 11:43:20.911373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.079 [2024-05-14 11:43:20.911424] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744071253131263 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.079 [2024-05-14 11:43:20.911440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.079 [2024-05-14 11:43:20.911494] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.079 [2024-05-14 11:43:20.911509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.079 #17 NEW cov: 12111 ft: 14088 corp: 11/655b lim: 105 exec/s: 0 rss: 72Mb L: 73/84 MS: 1 ChangeByte- 00:06:54.079 [2024-05-14 11:43:20.961249] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12370169555311111083 len:43948 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.079 [2024-05-14 11:43:20.961276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.079 #23 NEW cov: 12111 ft: 14528 corp: 12/691b lim: 105 exec/s: 0 rss: 72Mb L: 36/84 MS: 1 EraseBytes- 00:06:54.079 [2024-05-14 11:43:21.011740] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446655765071527935 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.079 [2024-05-14 11:43:21.011767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.079 [2024-05-14 11:43:21.011835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.079 [2024-05-14 11:43:21.011850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.079 [2024-05-14 11:43:21.011905] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.080 [2024-05-14 11:43:21.011919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.080 [2024-05-14 11:43:21.011973] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.080 [2024-05-14 11:43:21.011989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.080 #29 NEW cov: 12111 ft: 14577 corp: 13/778b lim: 105 exec/s: 0 rss: 72Mb L: 87/87 MS: 1 InsertRepeatedBytes- 00:06:54.080 [2024-05-14 11:43:21.061861] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.080 [2024-05-14 11:43:21.061887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.080 [2024-05-14 11:43:21.061934] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.080 [2024-05-14 11:43:21.061950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.080 [2024-05-14 11:43:21.062004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.080 [2024-05-14 11:43:21.062019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.080 [2024-05-14 11:43:21.062072] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.080 [2024-05-14 11:43:21.062088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.080 #30 NEW cov: 12111 ft: 14651 corp: 14/862b lim: 105 exec/s: 0 rss: 72Mb L: 84/87 MS: 1 ChangeByte- 00:06:54.080 [2024-05-14 11:43:21.101748] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.080 [2024-05-14 11:43:21.101774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.080 [2024-05-14 11:43:21.101822] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.080 [2024-05-14 11:43:21.101838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.080 NEW_FUNC[1/1]: 0x19feca0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:54.080 #31 NEW cov: 12134 ft: 14696 corp: 15/924b lim: 105 exec/s: 0 rss: 72Mb L: 62/87 MS: 1 PersAutoDict- DE: "\000\003"- 00:06:54.080 [2024-05-14 11:43:21.141993] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.080 [2024-05-14 11:43:21.142023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.080 
[2024-05-14 11:43:21.142053] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.080 [2024-05-14 11:43:21.142069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.080 [2024-05-14 11:43:21.142120] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.080 [2024-05-14 11:43:21.142136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.080 #32 NEW cov: 12134 ft: 14730 corp: 16/989b lim: 105 exec/s: 0 rss: 72Mb L: 65/87 MS: 1 ChangeByte- 00:06:54.340 [2024-05-14 11:43:21.182184] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.340 [2024-05-14 11:43:21.182212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.340 [2024-05-14 11:43:21.182263] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.340 [2024-05-14 11:43:21.182279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.340 [2024-05-14 11:43:21.182333] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.340 [2024-05-14 11:43:21.182348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.340 [2024-05-14 11:43:21.182406] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.340 [2024-05-14 11:43:21.182420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.340 #33 NEW cov: 12134 ft: 14777 corp: 17/1073b lim: 105 exec/s: 0 rss: 72Mb L: 84/87 MS: 1 ShuffleBytes- 00:06:54.340 [2024-05-14 11:43:21.221983] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.340 [2024-05-14 11:43:21.222009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.340 #34 NEW cov: 12134 ft: 14788 corp: 18/1113b lim: 105 exec/s: 34 rss: 72Mb L: 40/87 MS: 1 EraseBytes- 00:06:54.340 [2024-05-14 11:43:21.262265] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.340 [2024-05-14 11:43:21.262293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.340 [2024-05-14 11:43:21.262336] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.340 [2024-05-14 
11:43:21.262352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.340 #35 NEW cov: 12134 ft: 14819 corp: 19/1174b lim: 105 exec/s: 35 rss: 72Mb L: 61/87 MS: 1 CopyPart- 00:06:54.340 [2024-05-14 11:43:21.302584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.340 [2024-05-14 11:43:21.302612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.340 [2024-05-14 11:43:21.302667] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.340 [2024-05-14 11:43:21.302682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.340 [2024-05-14 11:43:21.302737] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.340 [2024-05-14 11:43:21.302752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.340 [2024-05-14 11:43:21.302805] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.340 [2024-05-14 11:43:21.302820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.340 #36 NEW cov: 12134 ft: 14830 corp: 20/1258b lim: 105 exec/s: 36 rss: 72Mb L: 84/87 MS: 1 ChangeBit- 00:06:54.340 [2024-05-14 11:43:21.342458] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.340 [2024-05-14 11:43:21.342486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.340 [2024-05-14 11:43:21.342519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.340 [2024-05-14 11:43:21.342535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.340 #37 NEW cov: 12134 ft: 14859 corp: 21/1319b lim: 105 exec/s: 37 rss: 72Mb L: 61/87 MS: 1 InsertByte- 00:06:54.340 [2024-05-14 11:43:21.382587] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.340 [2024-05-14 11:43:21.382615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.340 [2024-05-14 11:43:21.382683] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65427 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.340 [2024-05-14 11:43:21.382700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.340 #38 NEW cov: 12134 ft: 14889 corp: 22/1381b lim: 
105 exec/s: 38 rss: 73Mb L: 62/87 MS: 1 InsertByte- 00:06:54.340 [2024-05-14 11:43:21.422929] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.340 [2024-05-14 11:43:21.422957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.340 [2024-05-14 11:43:21.423019] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.340 [2024-05-14 11:43:21.423036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.340 [2024-05-14 11:43:21.423088] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.340 [2024-05-14 11:43:21.423104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.340 [2024-05-14 11:43:21.423157] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65286 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.340 [2024-05-14 11:43:21.423173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.600 #39 NEW cov: 12134 ft: 14947 corp: 23/1466b lim: 105 exec/s: 39 rss: 73Mb L: 85/87 MS: 1 InsertByte- 00:06:54.600 [2024-05-14 11:43:21.472875] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.600 [2024-05-14 11:43:21.472903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.600 [2024-05-14 11:43:21.472952] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.600 [2024-05-14 11:43:21.472968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.600 #40 NEW cov: 12134 ft: 14954 corp: 24/1528b lim: 105 exec/s: 40 rss: 73Mb L: 62/87 MS: 1 CrossOver- 00:06:54.600 [2024-05-14 11:43:21.512970] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.600 [2024-05-14 11:43:21.512998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.600 [2024-05-14 11:43:21.513045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.600 [2024-05-14 11:43:21.513063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.600 #41 NEW cov: 12134 ft: 14975 corp: 25/1589b lim: 105 exec/s: 41 rss: 73Mb L: 61/87 MS: 1 ChangeBinInt- 00:06:54.600 [2024-05-14 11:43:21.553312] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.600 [2024-05-14 11:43:21.553340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.600 [2024-05-14 11:43:21.553411] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.600 [2024-05-14 11:43:21.553428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.600 [2024-05-14 11:43:21.553494] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:2907074031914057598 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.600 [2024-05-14 11:43:21.553510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.600 [2024-05-14 11:43:21.553564] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.600 [2024-05-14 11:43:21.553580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.600 #42 NEW cov: 12134 ft: 15011 corp: 26/1682b lim: 105 exec/s: 42 rss: 73Mb L: 93/93 MS: 1 PersAutoDict- DE: "\377\377~(X\000m\225"- 00:06:54.600 [2024-05-14 11:43:21.603323] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.600 [2024-05-14 11:43:21.603352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.600 [2024-05-14 11:43:21.603396] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.600 [2024-05-14 11:43:21.603428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.600 [2024-05-14 11:43:21.603483] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.600 [2024-05-14 11:43:21.603502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.600 #43 NEW cov: 12134 ft: 15038 corp: 27/1751b lim: 105 exec/s: 43 rss: 73Mb L: 69/93 MS: 1 InsertRepeatedBytes- 00:06:54.600 [2024-05-14 11:43:21.643422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.600 [2024-05-14 11:43:21.643451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.600 [2024-05-14 11:43:21.643507] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.600 [2024-05-14 11:43:21.643524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.600 [2024-05-14 11:43:21.643578] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744072300265471 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.600 [2024-05-14 11:43:21.643595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.600 #44 NEW cov: 12134 ft: 15052 corp: 28/1814b lim: 105 exec/s: 44 rss: 73Mb L: 63/93 MS: 1 CMP- DE: ">4"- 00:06:54.859 [2024-05-14 11:43:21.693710] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.859 [2024-05-14 11:43:21.693739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.859 [2024-05-14 11:43:21.693784] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.859 [2024-05-14 11:43:21.693800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.859 [2024-05-14 11:43:21.693856] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.859 [2024-05-14 11:43:21.693871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.859 [2024-05-14 11:43:21.693924] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744069421766143 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.859 [2024-05-14 11:43:21.693940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.859 #45 NEW cov: 12134 ft: 15069 corp: 29/1898b lim: 105 exec/s: 45 rss: 73Mb L: 84/93 MS: 1 PersAutoDict- DE: "\377\377~(X\000m\225"- 00:06:54.859 [2024-05-14 11:43:21.733569] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12370169555311111083 len:43948 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.859 [2024-05-14 11:43:21.733597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.859 [2024-05-14 11:43:21.733643] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12370169555311111083 len:43948 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.859 [2024-05-14 11:43:21.733659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.859 #46 NEW cov: 12134 ft: 15096 corp: 30/1955b lim: 105 exec/s: 46 rss: 73Mb L: 57/93 MS: 1 ChangeBit- 00:06:54.859 [2024-05-14 11:43:21.773676] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.859 [2024-05-14 11:43:21.773707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.859 [2024-05-14 11:43:21.773767] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446743712932298751 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.859 [2024-05-14 
11:43:21.773784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.859 #47 NEW cov: 12134 ft: 15097 corp: 31/2002b lim: 105 exec/s: 47 rss: 73Mb L: 47/93 MS: 1 EraseBytes- 00:06:54.859 [2024-05-14 11:43:21.813779] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.859 [2024-05-14 11:43:21.813805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.859 [2024-05-14 11:43:21.813852] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.859 [2024-05-14 11:43:21.813868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.860 #48 NEW cov: 12134 ft: 15179 corp: 32/2064b lim: 105 exec/s: 48 rss: 73Mb L: 62/93 MS: 1 ChangeByte- 00:06:54.860 [2024-05-14 11:43:21.854241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.860 [2024-05-14 11:43:21.854270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.860 [2024-05-14 11:43:21.854324] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.860 [2024-05-14 11:43:21.854339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.860 [2024-05-14 11:43:21.854392] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.860 [2024-05-14 11:43:21.854407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.860 [2024-05-14 11:43:21.854461] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:9727775195120271359 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.860 [2024-05-14 11:43:21.854476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.860 [2024-05-14 11:43:21.854527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.860 [2024-05-14 11:43:21.854542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:54.860 #49 NEW cov: 12134 ft: 15237 corp: 33/2169b lim: 105 exec/s: 49 rss: 74Mb L: 105/105 MS: 1 CrossOver- 00:06:54.860 [2024-05-14 11:43:21.903911] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.860 [2024-05-14 11:43:21.903939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.860 #50 NEW cov: 12134 ft: 15243 corp: 34/2206b 
lim: 105 exec/s: 50 rss: 74Mb L: 37/105 MS: 1 EraseBytes- 00:06:55.118 [2024-05-14 11:43:21.954164] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.118 [2024-05-14 11:43:21.954192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.118 [2024-05-14 11:43:21.954237] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.118 [2024-05-14 11:43:21.954255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.118 #51 NEW cov: 12134 ft: 15252 corp: 35/2266b lim: 105 exec/s: 51 rss: 74Mb L: 60/105 MS: 1 ChangeByte- 00:06:55.118 [2024-05-14 11:43:21.994303] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.118 [2024-05-14 11:43:21.994330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.118 [2024-05-14 11:43:21.994393] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65427 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.118 [2024-05-14 11:43:21.994409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.118 #52 NEW cov: 12134 ft: 15262 corp: 36/2328b lim: 105 exec/s: 52 rss: 74Mb L: 62/105 MS: 1 ChangeByte- 00:06:55.118 [2024-05-14 11:43:22.034513] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.118 [2024-05-14 11:43:22.034540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.118 [2024-05-14 11:43:22.034590] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6341188763318438731 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.118 [2024-05-14 11:43:22.034604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.118 [2024-05-14 11:43:22.034658] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.118 [2024-05-14 11:43:22.034674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.119 #53 NEW cov: 12134 ft: 15267 corp: 37/2407b lim: 105 exec/s: 53 rss: 74Mb L: 79/105 MS: 1 InsertRepeatedBytes- 00:06:55.119 [2024-05-14 11:43:22.074661] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.119 [2024-05-14 11:43:22.074690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.119 [2024-05-14 11:43:22.074736] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 
lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.119 [2024-05-14 11:43:22.074754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.119 [2024-05-14 11:43:22.074809] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744072300265471 len:65474 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.119 [2024-05-14 11:43:22.074825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.119 #54 NEW cov: 12134 ft: 15280 corp: 38/2470b lim: 105 exec/s: 54 rss: 74Mb L: 63/105 MS: 1 CopyPart- 00:06:55.119 [2024-05-14 11:43:22.114759] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.119 [2024-05-14 11:43:22.114786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.119 [2024-05-14 11:43:22.114843] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.119 [2024-05-14 11:43:22.114859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.119 [2024-05-14 11:43:22.114917] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.119 [2024-05-14 11:43:22.114933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.119 #55 NEW cov: 12134 ft: 15319 corp: 39/2539b lim: 105 exec/s: 55 rss: 74Mb L: 69/105 MS: 1 ShuffleBytes- 00:06:55.119 [2024-05-14 11:43:22.164866] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.119 [2024-05-14 11:43:22.164893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.119 [2024-05-14 11:43:22.164955] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.119 [2024-05-14 11:43:22.164971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.119 [2024-05-14 11:43:22.165026] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:4107282858747101183 len:65452 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.119 [2024-05-14 11:43:22.165042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.119 #56 NEW cov: 12134 ft: 15339 corp: 40/2602b lim: 105 exec/s: 56 rss: 74Mb L: 63/105 MS: 1 InsertByte- 00:06:55.119 [2024-05-14 11:43:22.205143] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.119 [2024-05-14 11:43:22.205171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.119 [2024-05-14 11:43:22.205217] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.119 [2024-05-14 11:43:22.205233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.119 [2024-05-14 11:43:22.205286] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.119 [2024-05-14 11:43:22.205303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.119 [2024-05-14 11:43:22.205354] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:55.119 [2024-05-14 11:43:22.205369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:55.386 #57 NEW cov: 12134 ft: 15351 corp: 41/2687b lim: 105 exec/s: 28 rss: 74Mb L: 85/105 MS: 1 InsertByte- 00:06:55.386 #57 DONE cov: 12134 ft: 15351 corp: 41/2687b lim: 105 exec/s: 28 rss: 74Mb 00:06:55.386 ###### Recommended dictionary. ###### 00:06:55.386 "\000\003" # Uses: 1 00:06:55.387 "\377\377~(X\000m\225" # Uses: 2 00:06:55.387 ">4" # Uses: 0 00:06:55.387 ###### End of recommended dictionary. ###### 00:06:55.387 Done 57 runs in 2 second(s) 00:06:55.387 [2024-05-14 11:43:22.226720] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:55.387 11:43:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:06:55.387 11:43:22 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:55.387 11:43:22 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:55.387 11:43:22 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:06:55.387 11:43:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:06:55.387 11:43:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:55.387 11:43:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:55.387 11:43:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:06:55.387 11:43:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:06:55.387 11:43:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:55.387 11:43:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:55.387 11:43:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:06:55.387 11:43:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4417 00:06:55.387 11:43:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:06:55.387 11:43:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:06:55.387 11:43:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 
's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:55.387 11:43:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:55.387 11:43:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:55.387 11:43:22 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:06:55.387 [2024-05-14 11:43:22.394103] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:06:55.387 [2024-05-14 11:43:22.394193] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3640640 ] 00:06:55.387 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.651 [2024-05-14 11:43:22.569645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.651 [2024-05-14 11:43:22.635905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.651 [2024-05-14 11:43:22.695001] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.651 [2024-05-14 11:43:22.710953] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:55.651 [2024-05-14 11:43:22.711351] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:06:55.651 INFO: Running with entropic power schedule (0xFF, 100). 00:06:55.651 INFO: Seed: 3322721415 00:06:55.909 INFO: Loaded 1 modules (351429 inline 8-bit counters): 351429 [0x290e30c, 0x2963fd1), 00:06:55.909 INFO: Loaded 1 PC tables (351429 PCs): 351429 [0x2963fd8,0x2ec0c28), 00:06:55.909 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:06:55.909 INFO: A corpus is not provided, starting from an empty corpus 00:06:55.909 #2 INITED exec/s: 0 rss: 63Mb 00:06:55.909 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:55.909 This may also happen if the target rejected all inputs we tried so far 00:06:55.909 [2024-05-14 11:43:22.756113] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.909 [2024-05-14 11:43:22.756146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.909 [2024-05-14 11:43:22.756179] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.909 [2024-05-14 11:43:22.756197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.909 [2024-05-14 11:43:22.756232] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.909 [2024-05-14 11:43:22.756248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.909 [2024-05-14 11:43:22.756277] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:55.909 [2024-05-14 11:43:22.756293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.167 NEW_FUNC[1/687]: 0x49bb60 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:06:56.167 NEW_FUNC[2/687]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:56.167 #8 NEW cov: 11911 ft: 11912 corp: 2/107b lim: 120 exec/s: 0 rss: 70Mb L: 106/106 MS: 1 InsertRepeatedBytes- 00:06:56.167 [2024-05-14 11:43:23.096813] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.167 [2024-05-14 11:43:23.096852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.167 [2024-05-14 11:43:23.096887] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.167 [2024-05-14 11:43:23.096905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.167 #10 NEW cov: 12041 ft: 12895 corp: 3/161b lim: 120 exec/s: 0 rss: 70Mb L: 54/106 MS: 2 ChangeByte-InsertRepeatedBytes- 00:06:56.167 [2024-05-14 11:43:23.156864] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.167 [2024-05-14 11:43:23.156895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.167 [2024-05-14 11:43:23.156929] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.167 [2024-05-14 11:43:23.156946] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.167 #11 NEW cov: 12047 ft: 13060 corp: 4/223b lim: 120 exec/s: 0 rss: 70Mb L: 62/106 MS: 1 CMP- DE: "\000\205=c\3644\270\016"- 00:06:56.167 [2024-05-14 11:43:23.227052] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.167 [2024-05-14 11:43:23.227083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.167 [2024-05-14 11:43:23.227116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.167 [2024-05-14 11:43:23.227134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.424 #12 NEW cov: 12132 ft: 13387 corp: 5/278b lim: 120 exec/s: 0 rss: 70Mb L: 55/106 MS: 1 InsertByte- 00:06:56.424 [2024-05-14 11:43:23.287166] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.424 [2024-05-14 11:43:23.287195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.424 [2024-05-14 11:43:23.287243] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.424 [2024-05-14 11:43:23.287265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.424 #13 NEW cov: 12132 ft: 13576 corp: 6/340b lim: 120 exec/s: 0 rss: 70Mb L: 62/106 MS: 1 ChangeBinInt- 00:06:56.424 [2024-05-14 11:43:23.357343] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.424 [2024-05-14 11:43:23.357373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.424 [2024-05-14 11:43:23.357413] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.424 [2024-05-14 11:43:23.357447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.424 #14 NEW cov: 12132 ft: 13732 corp: 7/402b lim: 120 exec/s: 0 rss: 70Mb L: 62/106 MS: 1 ChangeBit- 00:06:56.424 [2024-05-14 11:43:23.427575] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.424 [2024-05-14 11:43:23.427605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.424 [2024-05-14 11:43:23.427640] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.424 [2024-05-14 11:43:23.427657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.424 #15 NEW cov: 12132 ft: 13791 corp: 8/456b lim: 120 exec/s: 0 rss: 70Mb L: 54/106 MS: 1 ChangeBit- 00:06:56.424 [2024-05-14 11:43:23.477842] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.424 [2024-05-14 11:43:23.477871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.424 [2024-05-14 11:43:23.477904] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.424 [2024-05-14 11:43:23.477922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.424 [2024-05-14 11:43:23.477951] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.424 [2024-05-14 11:43:23.477968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.424 [2024-05-14 11:43:23.477996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18386226953716760575 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.424 [2024-05-14 11:43:23.478012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.782 #16 NEW cov: 12132 ft: 13892 corp: 9/562b lim: 120 exec/s: 0 rss: 71Mb L: 106/106 MS: 1 ChangeByte- 00:06:56.782 [2024-05-14 11:43:23.547844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.782 [2024-05-14 11:43:23.547874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.782 [2024-05-14 11:43:23.547908] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.782 [2024-05-14 11:43:23.547925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.782 #17 NEW cov: 12132 ft: 13976 corp: 10/624b lim: 120 exec/s: 0 rss: 71Mb L: 62/106 MS: 1 ChangeBit- 00:06:56.782 [2024-05-14 11:43:23.618965] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.782 [2024-05-14 11:43:23.618998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.782 [2024-05-14 11:43:23.619042] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.782 [2024-05-14 11:43:23.619059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.782 [2024-05-14 11:43:23.619115] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:56.782 [2024-05-14 11:43:23.619132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.782 [2024-05-14 11:43:23.619187] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.782 [2024-05-14 11:43:23.619204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:56.782 NEW_FUNC[1/1]: 0x19feca0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:56.782 #18 NEW cov: 12149 ft: 14113 corp: 11/730b lim: 120 exec/s: 0 rss: 71Mb L: 106/106 MS: 1 ShuffleBytes- 00:06:56.782 [2024-05-14 11:43:23.658710] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.782 [2024-05-14 11:43:23.658738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.782 [2024-05-14 11:43:23.658786] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65321 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.782 [2024-05-14 11:43:23.658802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.782 #19 NEW cov: 12149 ft: 14212 corp: 12/784b lim: 120 exec/s: 0 rss: 71Mb L: 54/106 MS: 1 ChangeByte- 00:06:56.782 [2024-05-14 11:43:23.698816] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.782 [2024-05-14 11:43:23.698843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.782 [2024-05-14 11:43:23.698889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.782 [2024-05-14 11:43:23.698905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.782 #20 NEW cov: 12149 ft: 14216 corp: 13/838b lim: 120 exec/s: 0 rss: 71Mb L: 54/106 MS: 1 ChangeBit- 00:06:56.782 [2024-05-14 11:43:23.738935] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.782 [2024-05-14 11:43:23.738962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.782 [2024-05-14 11:43:23.739011] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65321 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.782 [2024-05-14 11:43:23.739026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.782 #21 NEW cov: 12149 ft: 14240 corp: 14/892b lim: 120 exec/s: 21 rss: 71Mb L: 54/106 MS: 1 ShuffleBytes- 00:06:56.782 [2024-05-14 11:43:23.789060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.782 [2024-05-14 11:43:23.789087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.782 [2024-05-14 11:43:23.789119] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.782 [2024-05-14 11:43:23.789135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.782 #22 NEW cov: 12149 ft: 14301 corp: 15/946b lim: 120 exec/s: 22 rss: 71Mb L: 54/106 MS: 1 PersAutoDict- DE: "\000\205=c\3644\270\016"- 00:06:56.782 [2024-05-14 11:43:23.829157] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.782 [2024-05-14 11:43:23.829184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.782 [2024-05-14 11:43:23.829215] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65321 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.782 [2024-05-14 11:43:23.829231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.782 #28 NEW cov: 12149 ft: 14336 corp: 16/1000b lim: 120 exec/s: 28 rss: 71Mb L: 54/106 MS: 1 CrossOver- 00:06:56.782 [2024-05-14 11:43:23.869609] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.782 [2024-05-14 11:43:23.869636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.782 [2024-05-14 11:43:23.869685] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.782 [2024-05-14 11:43:23.869700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.782 [2024-05-14 11:43:23.869750] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446462603027808255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.782 [2024-05-14 11:43:23.869765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.782 [2024-05-14 11:43:23.869819] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:56.782 [2024-05-14 11:43:23.869833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.040 #29 NEW cov: 12149 ft: 14365 corp: 17/1114b lim: 120 exec/s: 29 rss: 71Mb L: 114/114 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:06:57.041 [2024-05-14 11:43:23.919298] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.041 [2024-05-14 11:43:23.919326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.041 #30 NEW cov: 12149 ft: 15188 corp: 18/1160b lim: 120 exec/s: 30 rss: 71Mb L: 46/114 MS: 1 CrossOver- 00:06:57.041 [2024-05-14 11:43:23.959519] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.041 [2024-05-14 11:43:23.959547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.041 [2024-05-14 11:43:23.959595] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446742978492891135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.041 [2024-05-14 11:43:23.959617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.041 #31 NEW cov: 12149 ft: 15192 corp: 19/1222b lim: 120 exec/s: 31 rss: 71Mb L: 62/114 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:06:57.041 [2024-05-14 11:43:24.009654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.041 [2024-05-14 11:43:24.009681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.041 [2024-05-14 11:43:24.009728] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.041 [2024-05-14 11:43:24.009744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.041 #32 NEW cov: 12149 ft: 15202 corp: 20/1284b lim: 120 exec/s: 32 rss: 71Mb L: 62/114 MS: 1 CrossOver- 00:06:57.041 [2024-05-14 11:43:24.049787] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.041 [2024-05-14 11:43:24.049814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.041 [2024-05-14 11:43:24.049863] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.041 [2024-05-14 11:43:24.049880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.041 #33 NEW cov: 12149 ft: 15281 corp: 21/1347b lim: 120 exec/s: 33 rss: 71Mb L: 63/114 MS: 1 InsertByte- 00:06:57.041 [2024-05-14 11:43:24.100192] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.041 [2024-05-14 11:43:24.100219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.041 [2024-05-14 11:43:24.100264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65321 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.041 [2024-05-14 11:43:24.100279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.041 
[2024-05-14 11:43:24.100329] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18409307901807034367 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.041 [2024-05-14 11:43:24.100343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.041 [2024-05-14 11:43:24.100399] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.041 [2024-05-14 11:43:24.100432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.041 #34 NEW cov: 12149 ft: 15287 corp: 22/1453b lim: 120 exec/s: 34 rss: 71Mb L: 106/114 MS: 1 CrossOver- 00:06:57.299 [2024-05-14 11:43:24.139868] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.299 [2024-05-14 11:43:24.139895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.299 #35 NEW cov: 12149 ft: 15306 corp: 23/1487b lim: 120 exec/s: 35 rss: 71Mb L: 34/114 MS: 1 EraseBytes- 00:06:57.299 [2024-05-14 11:43:24.180140] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.299 [2024-05-14 11:43:24.180171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.299 [2024-05-14 11:43:24.180231] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.299 [2024-05-14 11:43:24.180247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.299 #36 NEW cov: 12149 ft: 15317 corp: 24/1541b lim: 120 exec/s: 36 rss: 71Mb L: 54/114 MS: 1 ChangeBit- 00:06:57.299 [2024-05-14 11:43:24.230255] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744039349813247 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.299 [2024-05-14 11:43:24.230282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.299 [2024-05-14 11:43:24.230330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.299 [2024-05-14 11:43:24.230346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.299 #37 NEW cov: 12149 ft: 15325 corp: 25/1603b lim: 120 exec/s: 37 rss: 71Mb L: 62/114 MS: 1 ChangeBinInt- 00:06:57.299 [2024-05-14 11:43:24.280691] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.299 [2024-05-14 11:43:24.280717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.299 [2024-05-14 11:43:24.280778] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:0 lba:18446744073709551615 len:65321 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.299 [2024-05-14 11:43:24.280795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.299 [2024-05-14 11:43:24.280844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18409307901807034367 len:24673 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.299 [2024-05-14 11:43:24.280860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.299 [2024-05-14 11:43:24.280911] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.299 [2024-05-14 11:43:24.280926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.299 #38 NEW cov: 12149 ft: 15334 corp: 26/1715b lim: 120 exec/s: 38 rss: 72Mb L: 112/114 MS: 1 InsertRepeatedBytes- 00:06:57.299 [2024-05-14 11:43:24.330543] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.299 [2024-05-14 11:43:24.330569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.299 [2024-05-14 11:43:24.330616] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.299 [2024-05-14 11:43:24.330632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.299 #39 NEW cov: 12149 ft: 15345 corp: 27/1777b lim: 120 exec/s: 39 rss: 72Mb L: 62/114 MS: 1 ChangeBit- 00:06:57.299 [2024-05-14 11:43:24.370666] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744039349813247 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.299 [2024-05-14 11:43:24.370692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.299 [2024-05-14 11:43:24.370752] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.299 [2024-05-14 11:43:24.370769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.558 #40 NEW cov: 12149 ft: 15353 corp: 28/1839b lim: 120 exec/s: 40 rss: 72Mb L: 62/114 MS: 1 ChangeASCIIInt- 00:06:57.558 [2024-05-14 11:43:24.420782] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.558 [2024-05-14 11:43:24.420810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.558 [2024-05-14 11:43:24.420847] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073675997183 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.558 [2024-05-14 11:43:24.420862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.558 #41 NEW cov: 12149 ft: 15391 corp: 29/1893b lim: 120 exec/s: 41 rss: 72Mb L: 54/114 MS: 1 ChangeBinInt- 00:06:57.558 [2024-05-14 11:43:24.460946] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.558 [2024-05-14 11:43:24.460973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.558 [2024-05-14 11:43:24.461006] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65321 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.558 [2024-05-14 11:43:24.461038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.558 #47 NEW cov: 12149 ft: 15403 corp: 30/1947b lim: 120 exec/s: 47 rss: 72Mb L: 54/114 MS: 1 CopyPart- 00:06:57.558 [2024-05-14 11:43:24.501336] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.558 [2024-05-14 11:43:24.501364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.558 [2024-05-14 11:43:24.501415] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.558 [2024-05-14 11:43:24.501431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.558 [2024-05-14 11:43:24.501482] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446462603027808255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.558 [2024-05-14 11:43:24.501498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.558 [2024-05-14 11:43:24.501548] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446181123756130303 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.558 [2024-05-14 11:43:24.501564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.558 #48 NEW cov: 12149 ft: 15453 corp: 31/2061b lim: 120 exec/s: 48 rss: 72Mb L: 114/114 MS: 1 ChangeBit- 00:06:57.558 [2024-05-14 11:43:24.541156] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.559 [2024-05-14 11:43:24.541183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.559 [2024-05-14 11:43:24.541224] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.559 [2024-05-14 11:43:24.541243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.559 #49 NEW cov: 12149 ft: 15471 corp: 32/2124b lim: 120 exec/s: 49 rss: 72Mb L: 63/114 MS: 1 CMP- DE: "><"- 00:06:57.559 [2024-05-14 11:43:24.591278] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.559 [2024-05-14 11:43:24.591305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.559 [2024-05-14 11:43:24.591355] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.559 [2024-05-14 11:43:24.591371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.559 #55 NEW cov: 12149 ft: 15478 corp: 33/2187b lim: 120 exec/s: 55 rss: 72Mb L: 63/114 MS: 1 ChangeByte- 00:06:57.559 [2024-05-14 11:43:24.631392] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073708109823 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.559 [2024-05-14 11:43:24.631418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.559 [2024-05-14 11:43:24.631481] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:16128 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.559 [2024-05-14 11:43:24.631497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.818 #56 NEW cov: 12156 ft: 15507 corp: 34/2250b lim: 120 exec/s: 56 rss: 73Mb L: 63/114 MS: 1 InsertByte- 00:06:57.818 [2024-05-14 11:43:24.671511] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.818 [2024-05-14 11:43:24.671537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.818 [2024-05-14 11:43:24.671569] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65321 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.818 [2024-05-14 11:43:24.671584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.818 #57 NEW cov: 12156 ft: 15519 corp: 35/2304b lim: 120 exec/s: 57 rss: 73Mb L: 54/114 MS: 1 ChangeBit- 00:06:57.818 [2024-05-14 11:43:24.721678] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.818 [2024-05-14 11:43:24.721706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.818 [2024-05-14 11:43:24.721745] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.818 [2024-05-14 11:43:24.721759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.818 #58 NEW cov: 12156 ft: 15533 corp: 36/2367b lim: 120 exec/s: 58 rss: 73Mb L: 63/114 MS: 1 ChangeBinInt- 00:06:57.818 [2024-05-14 11:43:24.762220] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073708109823 
len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.818 [2024-05-14 11:43:24.762248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.818 [2024-05-14 11:43:24.762298] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:16128 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.818 [2024-05-14 11:43:24.762314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.818 [2024-05-14 11:43:24.762368] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.818 [2024-05-14 11:43:24.762388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.818 [2024-05-14 11:43:24.762455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:9600939884541902592 len:3840 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.818 [2024-05-14 11:43:24.762470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.818 [2024-05-14 11:43:24.762521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:3798802776301855732 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:57.818 [2024-05-14 11:43:24.762537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:57.818 #59 NEW cov: 12156 ft: 15588 corp: 37/2487b lim: 120 exec/s: 29 rss: 73Mb L: 120/120 MS: 1 CopyPart- 00:06:57.818 #59 DONE cov: 12156 ft: 15588 corp: 37/2487b lim: 120 exec/s: 29 rss: 73Mb 00:06:57.818 ###### Recommended dictionary. ###### 00:06:57.818 "\000\205=c\3644\270\016" # Uses: 2 00:06:57.818 "\000\000\000\000\000\000\000\000" # Uses: 2 00:06:57.818 "><" # Uses: 0 00:06:57.818 ###### End of recommended dictionary. 
###### 00:06:57.818 Done 59 runs in 2 second(s) 00:06:57.819 [2024-05-14 11:43:24.791353] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:58.077 11:43:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:06:58.077 11:43:24 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:58.077 11:43:24 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:58.077 11:43:24 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:06:58.077 11:43:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:06:58.077 11:43:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:58.077 11:43:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:58.078 11:43:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:06:58.078 11:43:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:06:58.078 11:43:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:58.078 11:43:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:58.078 11:43:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:06:58.078 11:43:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4418 00:06:58.078 11:43:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:06:58.078 11:43:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:06:58.078 11:43:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:58.078 11:43:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:58.078 11:43:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:58.078 11:43:24 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:06:58.078 [2024-05-14 11:43:24.956295] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:06:58.078 [2024-05-14 11:43:24.956370] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3641179 ] 00:06:58.078 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.078 [2024-05-14 11:43:25.133735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.337 [2024-05-14 11:43:25.199717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.337 [2024-05-14 11:43:25.258425] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.337 [2024-05-14 11:43:25.274388] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:58.337 [2024-05-14 11:43:25.274786] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:06:58.337 INFO: Running with entropic power schedule (0xFF, 100). 00:06:58.337 INFO: Seed: 1591733824 00:06:58.337 INFO: Loaded 1 modules (351429 inline 8-bit counters): 351429 [0x290e30c, 0x2963fd1), 00:06:58.337 INFO: Loaded 1 PC tables (351429 PCs): 351429 [0x2963fd8,0x2ec0c28), 00:06:58.337 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:06:58.337 INFO: A corpus is not provided, starting from an empty corpus 00:06:58.337 #2 INITED exec/s: 0 rss: 64Mb 00:06:58.337 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:58.337 This may also happen if the target rejected all inputs we tried so far 00:06:58.337 [2024-05-14 11:43:25.319388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.337 [2024-05-14 11:43:25.319434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.337 [2024-05-14 11:43:25.319467] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.337 [2024-05-14 11:43:25.319482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.337 [2024-05-14 11:43:25.319511] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.337 [2024-05-14 11:43:25.319526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.596 NEW_FUNC[1/685]: 0x49f450 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:06:58.596 NEW_FUNC[2/685]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:58.596 #16 NEW cov: 11854 ft: 11852 corp: 2/80b lim: 100 exec/s: 0 rss: 70Mb L: 79/79 MS: 4 ChangeByte-ChangeBit-CrossOver-InsertRepeatedBytes- 00:06:58.596 [2024-05-14 11:43:25.650256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.596 [2024-05-14 11:43:25.650292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.596 [2024-05-14 11:43:25.650324] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.596 [2024-05-14 11:43:25.650340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.596 [2024-05-14 11:43:25.650368] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.596 [2024-05-14 11:43:25.650392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.596 [2024-05-14 11:43:25.650421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:58.596 [2024-05-14 11:43:25.650435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.854 #17 NEW cov: 11984 ft: 12586 corp: 3/160b lim: 100 exec/s: 0 rss: 70Mb L: 80/80 MS: 1 InsertByte- 00:06:58.854 [2024-05-14 11:43:25.720350] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.854 [2024-05-14 11:43:25.720390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.854 [2024-05-14 11:43:25.720423] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.854 [2024-05-14 11:43:25.720439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.854 [2024-05-14 11:43:25.720469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.854 [2024-05-14 11:43:25.720484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.854 [2024-05-14 11:43:25.720512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:58.854 [2024-05-14 11:43:25.720527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.854 #23 NEW cov: 11990 ft: 12891 corp: 4/240b lim: 100 exec/s: 0 rss: 70Mb L: 80/80 MS: 1 CopyPart- 00:06:58.854 [2024-05-14 11:43:25.790489] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.854 [2024-05-14 11:43:25.790517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.854 [2024-05-14 11:43:25.790548] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.854 [2024-05-14 11:43:25.790564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.854 [2024-05-14 11:43:25.790593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.854 [2024-05-14 11:43:25.790607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.854 [2024-05-14 11:43:25.790635] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:58.854 [2024-05-14 11:43:25.790649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 
cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.855 #24 NEW cov: 12075 ft: 13242 corp: 5/320b lim: 100 exec/s: 0 rss: 71Mb L: 80/80 MS: 1 ChangeBinInt- 00:06:58.855 [2024-05-14 11:43:25.860621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.855 [2024-05-14 11:43:25.860651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.855 [2024-05-14 11:43:25.860683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.855 [2024-05-14 11:43:25.860698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.855 [2024-05-14 11:43:25.860727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.855 [2024-05-14 11:43:25.860741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.855 #25 NEW cov: 12075 ft: 13305 corp: 6/399b lim: 100 exec/s: 0 rss: 71Mb L: 79/80 MS: 1 ChangeByte- 00:06:58.855 [2024-05-14 11:43:25.910784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:58.855 [2024-05-14 11:43:25.910812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.855 [2024-05-14 11:43:25.910856] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:58.855 [2024-05-14 11:43:25.910873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.855 [2024-05-14 11:43:25.910902] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:58.855 [2024-05-14 11:43:25.910920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.855 [2024-05-14 11:43:25.910948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:58.855 [2024-05-14 11:43:25.910962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.113 #26 NEW cov: 12075 ft: 13403 corp: 7/479b lim: 100 exec/s: 0 rss: 71Mb L: 80/80 MS: 1 ChangeBinInt- 00:06:59.113 [2024-05-14 11:43:25.981024] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:59.113 [2024-05-14 11:43:25.981054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.113 [2024-05-14 11:43:25.981101] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:59.113 [2024-05-14 11:43:25.981117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.113 [2024-05-14 11:43:25.981146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:59.113 [2024-05-14 11:43:25.981162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.113 [2024-05-14 11:43:25.981191] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:59.113 [2024-05-14 11:43:25.981206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.113 #27 NEW cov: 12075 ft: 13509 corp: 8/559b lim: 100 exec/s: 0 rss: 71Mb L: 80/80 MS: 1 ChangeByte- 00:06:59.113 [2024-05-14 11:43:26.051191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:59.113 [2024-05-14 11:43:26.051221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.113 [2024-05-14 11:43:26.051253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:59.113 [2024-05-14 11:43:26.051269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.113 [2024-05-14 11:43:26.051298] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:59.113 [2024-05-14 11:43:26.051313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.113 [2024-05-14 11:43:26.051341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:59.113 [2024-05-14 11:43:26.051355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.113 #28 NEW cov: 12075 ft: 13590 corp: 9/655b lim: 100 exec/s: 0 rss: 71Mb L: 96/96 MS: 1 CopyPart- 00:06:59.114 [2024-05-14 11:43:26.121309] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:59.114 [2024-05-14 11:43:26.121336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.114 [2024-05-14 11:43:26.121389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:59.114 [2024-05-14 11:43:26.121405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.114 [2024-05-14 11:43:26.121434] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:59.114 [2024-05-14 11:43:26.121449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.114 [2024-05-14 11:43:26.121476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:59.114 [2024-05-14 11:43:26.121495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.114 #29 NEW cov: 12075 ft: 13641 corp: 10/741b lim: 100 exec/s: 0 rss: 71Mb L: 86/96 MS: 1 CopyPart- 00:06:59.114 [2024-05-14 11:43:26.171450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:59.114 [2024-05-14 11:43:26.171477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.114 [2024-05-14 11:43:26.171523] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 
cid:1 nsid:0 00:06:59.114 [2024-05-14 11:43:26.171538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.114 [2024-05-14 11:43:26.171567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:59.114 [2024-05-14 11:43:26.171581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.114 [2024-05-14 11:43:26.171608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:59.114 [2024-05-14 11:43:26.171622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.372 NEW_FUNC[1/1]: 0x19feca0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:06:59.372 #30 NEW cov: 12098 ft: 13708 corp: 11/837b lim: 100 exec/s: 0 rss: 71Mb L: 96/96 MS: 1 ShuffleBytes- 00:06:59.372 [2024-05-14 11:43:26.241635] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:59.372 [2024-05-14 11:43:26.241664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.372 [2024-05-14 11:43:26.241710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:59.372 [2024-05-14 11:43:26.241726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.372 [2024-05-14 11:43:26.241754] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:59.372 [2024-05-14 11:43:26.241769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.372 [2024-05-14 11:43:26.241796] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:59.372 [2024-05-14 11:43:26.241810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.372 #31 NEW cov: 12098 ft: 13786 corp: 12/923b lim: 100 exec/s: 0 rss: 72Mb L: 86/96 MS: 1 ChangeByte- 00:06:59.372 [2024-05-14 11:43:26.311863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:59.372 [2024-05-14 11:43:26.311891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.372 [2024-05-14 11:43:26.311936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:59.372 [2024-05-14 11:43:26.311952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.372 [2024-05-14 11:43:26.311981] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:59.372 [2024-05-14 11:43:26.311996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.372 [2024-05-14 11:43:26.312023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:59.372 [2024-05-14 11:43:26.312037] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.372 #32 NEW cov: 12098 ft: 13838 corp: 13/1021b lim: 100 exec/s: 32 rss: 72Mb L: 98/98 MS: 1 InsertRepeatedBytes- 00:06:59.372 [2024-05-14 11:43:26.381872] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:59.372 [2024-05-14 11:43:26.381900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.372 #34 NEW cov: 12098 ft: 14250 corp: 14/1058b lim: 100 exec/s: 34 rss: 72Mb L: 37/98 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:59.372 [2024-05-14 11:43:26.442175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:59.372 [2024-05-14 11:43:26.442203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.372 [2024-05-14 11:43:26.442232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:59.372 [2024-05-14 11:43:26.442247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.372 [2024-05-14 11:43:26.442275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:59.372 [2024-05-14 11:43:26.442288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.372 [2024-05-14 11:43:26.442314] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:59.372 [2024-05-14 11:43:26.442343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.632 #35 NEW cov: 12098 ft: 14307 corp: 15/1149b lim: 100 exec/s: 35 rss: 72Mb L: 91/98 MS: 1 InsertRepeatedBytes- 00:06:59.632 [2024-05-14 11:43:26.492281] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:59.632 [2024-05-14 11:43:26.492308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.632 [2024-05-14 11:43:26.492354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:59.632 [2024-05-14 11:43:26.492369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.632 [2024-05-14 11:43:26.492405] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:59.632 [2024-05-14 11:43:26.492420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.632 [2024-05-14 11:43:26.492448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:59.632 [2024-05-14 11:43:26.492462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.632 #36 NEW cov: 12098 ft: 14328 corp: 16/1229b lim: 100 exec/s: 36 rss: 72Mb L: 80/98 MS: 1 CopyPart- 00:06:59.632 [2024-05-14 11:43:26.542489] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:59.632 [2024-05-14 11:43:26.542526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.632 [2024-05-14 11:43:26.542572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:59.632 [2024-05-14 11:43:26.542587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.632 [2024-05-14 11:43:26.542615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:59.632 [2024-05-14 11:43:26.542630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.632 [2024-05-14 11:43:26.542657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:59.632 [2024-05-14 11:43:26.542671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.632 #37 NEW cov: 12098 ft: 14388 corp: 17/1309b lim: 100 exec/s: 37 rss: 72Mb L: 80/98 MS: 1 CopyPart- 00:06:59.632 [2024-05-14 11:43:26.592573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:59.632 [2024-05-14 11:43:26.592600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.632 [2024-05-14 11:43:26.592631] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:59.632 [2024-05-14 11:43:26.592647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.632 [2024-05-14 11:43:26.592676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:59.632 [2024-05-14 11:43:26.592690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.632 [2024-05-14 11:43:26.592717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:59.632 [2024-05-14 11:43:26.592732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.632 #38 NEW cov: 12098 ft: 14396 corp: 18/1401b lim: 100 exec/s: 38 rss: 72Mb L: 92/98 MS: 1 InsertRepeatedBytes- 00:06:59.632 [2024-05-14 11:43:26.642711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:59.632 [2024-05-14 11:43:26.642739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.632 [2024-05-14 11:43:26.642770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:59.632 [2024-05-14 11:43:26.642786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.632 [2024-05-14 11:43:26.642814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:59.632 [2024-05-14 11:43:26.642829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.632 [2024-05-14 11:43:26.642856] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:59.632 [2024-05-14 11:43:26.642870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.632 #39 NEW cov: 12098 ft: 14438 corp: 19/1484b lim: 100 exec/s: 39 rss: 72Mb L: 83/98 MS: 1 CMP- DE: "\000\000\000\000"- 00:06:59.632 [2024-05-14 11:43:26.692877] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:59.632 [2024-05-14 11:43:26.692905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.632 [2024-05-14 11:43:26.692935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:59.632 [2024-05-14 11:43:26.692952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.632 [2024-05-14 11:43:26.692980] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:59.632 [2024-05-14 11:43:26.692994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.632 [2024-05-14 11:43:26.693022] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:59.632 [2024-05-14 11:43:26.693052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.892 #40 NEW cov: 12098 ft: 14457 corp: 20/1582b lim: 100 exec/s: 40 rss: 72Mb L: 98/98 MS: 1 CrossOver- 00:06:59.892 [2024-05-14 11:43:26.762980] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:59.892 [2024-05-14 11:43:26.763007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.892 [2024-05-14 11:43:26.763058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:59.892 [2024-05-14 11:43:26.763074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.892 [2024-05-14 11:43:26.763103] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:59.892 [2024-05-14 11:43:26.763117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.892 #41 NEW cov: 12098 ft: 14482 corp: 21/1645b lim: 100 exec/s: 41 rss: 72Mb L: 63/98 MS: 1 CrossOver- 00:06:59.892 [2024-05-14 11:43:26.833123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:59.892 [2024-05-14 11:43:26.833150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.892 [2024-05-14 11:43:26.833197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:59.892 [2024-05-14 11:43:26.833212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.892 #46 NEW 
cov: 12098 ft: 14745 corp: 22/1703b lim: 100 exec/s: 46 rss: 72Mb L: 58/98 MS: 5 ChangeBit-ChangeByte-ChangeBinInt-PersAutoDict-CrossOver- DE: "\000\000\000\000"- 00:06:59.892 [2024-05-14 11:43:26.883329] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:59.892 [2024-05-14 11:43:26.883357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.892 [2024-05-14 11:43:26.883396] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:59.892 [2024-05-14 11:43:26.883413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.892 [2024-05-14 11:43:26.883442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:59.892 [2024-05-14 11:43:26.883457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.892 [2024-05-14 11:43:26.883484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:59.892 [2024-05-14 11:43:26.883498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.892 #47 NEW cov: 12098 ft: 14748 corp: 23/1783b lim: 100 exec/s: 47 rss: 72Mb L: 80/98 MS: 1 CopyPart- 00:06:59.892 [2024-05-14 11:43:26.933461] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:59.892 [2024-05-14 11:43:26.933488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.892 [2024-05-14 11:43:26.933519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:59.892 [2024-05-14 11:43:26.933535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.892 [2024-05-14 11:43:26.933564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:59.892 [2024-05-14 11:43:26.933578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.151 #48 NEW cov: 12098 ft: 14759 corp: 24/1848b lim: 100 exec/s: 48 rss: 72Mb L: 65/98 MS: 1 CopyPart- 00:07:00.151 [2024-05-14 11:43:27.003639] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:00.151 [2024-05-14 11:43:27.003667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.151 [2024-05-14 11:43:27.003712] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:00.151 [2024-05-14 11:43:27.003735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.151 [2024-05-14 11:43:27.003764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:00.151 [2024-05-14 11:43:27.003778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.151 [2024-05-14 
11:43:27.003805] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:00.151 [2024-05-14 11:43:27.003820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.151 #49 NEW cov: 12098 ft: 14774 corp: 25/1928b lim: 100 exec/s: 49 rss: 72Mb L: 80/98 MS: 1 ChangeByte- 00:07:00.151 [2024-05-14 11:43:27.063761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:00.151 [2024-05-14 11:43:27.063789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.151 [2024-05-14 11:43:27.063822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:00.151 [2024-05-14 11:43:27.063838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.151 #55 NEW cov: 12098 ft: 14780 corp: 26/1979b lim: 100 exec/s: 55 rss: 72Mb L: 51/98 MS: 1 EraseBytes- 00:07:00.151 [2024-05-14 11:43:27.134035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:00.151 [2024-05-14 11:43:27.134063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.151 [2024-05-14 11:43:27.134093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:00.151 [2024-05-14 11:43:27.134109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.151 [2024-05-14 11:43:27.134138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:00.151 [2024-05-14 11:43:27.134152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.151 [2024-05-14 11:43:27.134180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:00.151 [2024-05-14 11:43:27.134193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.151 #56 NEW cov: 12098 ft: 14788 corp: 27/2062b lim: 100 exec/s: 56 rss: 72Mb L: 83/98 MS: 1 CMP- DE: "\000\000\000\000"- 00:07:00.151 [2024-05-14 11:43:27.184086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:00.151 [2024-05-14 11:43:27.184115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.151 [2024-05-14 11:43:27.184148] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:00.151 [2024-05-14 11:43:27.184163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.151 [2024-05-14 11:43:27.184192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:00.151 [2024-05-14 11:43:27.184207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.151 #57 NEW cov: 12098 ft: 14819 corp: 28/2126b lim: 100 exec/s: 57 
rss: 72Mb L: 64/98 MS: 1 InsertByte- 00:07:00.151 [2024-05-14 11:43:27.234288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:00.152 [2024-05-14 11:43:27.234318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.152 [2024-05-14 11:43:27.234355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:00.152 [2024-05-14 11:43:27.234370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.152 [2024-05-14 11:43:27.234409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:00.152 [2024-05-14 11:43:27.234424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.152 [2024-05-14 11:43:27.234453] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:00.152 [2024-05-14 11:43:27.234467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.411 #58 NEW cov: 12098 ft: 14870 corp: 29/2224b lim: 100 exec/s: 58 rss: 72Mb L: 98/98 MS: 1 ChangeBit- 00:07:00.411 [2024-05-14 11:43:27.294486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:00.411 [2024-05-14 11:43:27.294515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.411 [2024-05-14 11:43:27.294547] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:00.411 [2024-05-14 11:43:27.294563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.411 [2024-05-14 11:43:27.294592] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:00.411 [2024-05-14 11:43:27.294607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.411 [2024-05-14 11:43:27.294635] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:00.411 [2024-05-14 11:43:27.294650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.411 #59 NEW cov: 12098 ft: 14883 corp: 30/2312b lim: 100 exec/s: 29 rss: 72Mb L: 88/98 MS: 1 CrossOver- 00:07:00.411 #59 DONE cov: 12098 ft: 14883 corp: 30/2312b lim: 100 exec/s: 29 rss: 72Mb 00:07:00.411 ###### Recommended dictionary. ###### 00:07:00.411 "\000\000\000\000" # Uses: 1 00:07:00.411 ###### End of recommended dictionary. 
###### 00:07:00.411 Done 59 runs in 2 second(s) 00:07:00.411 [2024-05-14 11:43:27.333451] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:00.411 11:43:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:07:00.411 11:43:27 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:00.411 11:43:27 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:00.411 11:43:27 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:07:00.411 11:43:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:07:00.411 11:43:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:00.411 11:43:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:00.411 11:43:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:00.411 11:43:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:07:00.411 11:43:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:00.411 11:43:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:00.411 11:43:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:07:00.411 11:43:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4419 00:07:00.411 11:43:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:00.411 11:43:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:07:00.411 11:43:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:00.411 11:43:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:00.411 11:43:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:00.411 11:43:27 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:07:00.411 [2024-05-14 11:43:27.499410] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:07:00.411 [2024-05-14 11:43:27.499474] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3641465 ] 00:07:00.670 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.670 [2024-05-14 11:43:27.683508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.670 [2024-05-14 11:43:27.749657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.930 [2024-05-14 11:43:27.809013] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:00.930 [2024-05-14 11:43:27.824968] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:00.930 [2024-05-14 11:43:27.825399] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:07:00.930 INFO: Running with entropic power schedule (0xFF, 100). 00:07:00.930 INFO: Seed: 4141766745 00:07:00.930 INFO: Loaded 1 modules (351429 inline 8-bit counters): 351429 [0x290e30c, 0x2963fd1), 00:07:00.930 INFO: Loaded 1 PC tables (351429 PCs): 351429 [0x2963fd8,0x2ec0c28), 00:07:00.930 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:07:00.930 INFO: A corpus is not provided, starting from an empty corpus 00:07:00.930 #2 INITED exec/s: 0 rss: 64Mb 00:07:00.930 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:00.930 This may also happen if the target rejected all inputs we tried so far 00:07:00.930 [2024-05-14 11:43:27.890422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17361641481138401520 len:61681 00:07:00.930 [2024-05-14 11:43:27.890452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.254 NEW_FUNC[1/685]: 0x4a2410 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:07:01.254 NEW_FUNC[2/685]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:01.254 #24 NEW cov: 11829 ft: 11829 corp: 2/16b lim: 50 exec/s: 0 rss: 70Mb L: 15/15 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:01.254 [2024-05-14 11:43:28.221413] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17310694510353772784 len:61681 00:07:01.254 [2024-05-14 11:43:28.221474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.254 #25 NEW cov: 11962 ft: 12490 corp: 3/32b lim: 50 exec/s: 0 rss: 70Mb L: 16/16 MS: 1 InsertByte- 00:07:01.254 [2024-05-14 11:43:28.271287] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17505756669214257392 len:61681 00:07:01.255 [2024-05-14 11:43:28.271317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.255 #26 NEW cov: 11968 ft: 12867 corp: 4/47b lim: 50 exec/s: 0 rss: 70Mb L: 15/16 MS: 1 ChangeBit- 00:07:01.255 [2024-05-14 11:43:28.311425] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17505721484842168560 len:61681 00:07:01.255 [2024-05-14 11:43:28.311454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.514 #27 NEW cov: 12053 ft: 13078 corp: 5/62b lim: 50 exec/s: 0 rss: 70Mb L: 15/16 MS: 1 ChangeBit- 00:07:01.514 [2024-05-14 11:43:28.361502] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4463894960863768816 len:61681 00:07:01.514 [2024-05-14 11:43:28.361530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.514 #28 NEW cov: 12053 ft: 13135 corp: 6/78b lim: 50 exec/s: 0 rss: 71Mb L: 16/16 MS: 1 InsertByte- 00:07:01.514 [2024-05-14 11:43:28.401649] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17361922956115112176 len:61681 00:07:01.514 [2024-05-14 11:43:28.401676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.514 #29 NEW cov: 12053 ft: 13171 corp: 7/93b lim: 50 exec/s: 0 rss: 71Mb L: 15/16 MS: 1 ChangeBit- 00:07:01.514 [2024-05-14 11:43:28.441770] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17505756669214257392 len:61665 00:07:01.514 [2024-05-14 11:43:28.441796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.514 #30 NEW cov: 12053 ft: 13219 corp: 8/108b lim: 50 exec/s: 0 rss: 71Mb L: 15/16 MS: 1 ChangeBit- 00:07:01.514 [2024-05-14 11:43:28.481883] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17361641481391046409 len:61681 00:07:01.514 [2024-05-14 11:43:28.481910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.514 #31 NEW cov: 12053 ft: 13298 corp: 9/123b lim: 50 exec/s: 0 rss: 71Mb L: 15/16 MS: 1 CMP- DE: "\377\377\377\011"- 00:07:01.514 [2024-05-14 11:43:28.522159] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17361641481391046409 len:61681 00:07:01.514 [2024-05-14 11:43:28.522186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.514 [2024-05-14 11:43:28.522234] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:716337258378035199 len:61681 00:07:01.514 [2024-05-14 11:43:28.522251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.514 #32 NEW cov: 12053 ft: 13642 corp: 10/145b lim: 50 exec/s: 0 rss: 71Mb L: 22/22 MS: 1 CopyPart- 00:07:01.514 [2024-05-14 11:43:28.572130] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17311820410260615408 len:61681 00:07:01.514 [2024-05-14 11:43:28.572157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.514 #33 NEW cov: 12053 ft: 13691 corp: 11/161b lim: 50 exec/s: 0 rss: 71Mb L: 16/22 MS: 1 ChangeBit- 00:07:01.773 [2024-05-14 11:43:28.612227] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17506038144190968048 len:61681 00:07:01.773 [2024-05-14 11:43:28.612254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.773 #34 NEW cov: 12053 ft: 13714 corp: 12/176b lim: 50 exec/s: 0 rss: 71Mb L: 15/22 MS: 1 ChangeBit- 00:07:01.773 [2024-05-14 11:43:28.642356] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17505756669214257393 len:61681 00:07:01.773 [2024-05-14 11:43:28.642385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.773 #35 NEW cov: 12053 ft: 13779 corp: 13/191b lim: 50 exec/s: 0 rss: 71Mb L: 15/22 MS: 1 ChangeBit- 00:07:01.773 [2024-05-14 11:43:28.682507] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17361641481391046409 len:61681 00:07:01.773 [2024-05-14 11:43:28.682534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.773 #36 NEW cov: 12053 ft: 13830 corp: 14/206b lim: 50 exec/s: 0 rss: 71Mb L: 15/22 MS: 1 ShuffleBytes- 00:07:01.773 [2024-05-14 11:43:28.722770] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4294901760 len:1 00:07:01.773 [2024-05-14 11:43:28.722798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.773 [2024-05-14 11:43:28.722850] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:256 00:07:01.773 [2024-05-14 11:43:28.722867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.773 [2024-05-14 11:43:28.722920] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17361641477262864624 len:61681 00:07:01.773 [2024-05-14 11:43:28.722936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.773 #37 NEW cov: 12053 ft: 14100 corp: 15/238b lim: 50 exec/s: 0 rss: 71Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:07:01.773 [2024-05-14 11:43:28.762895] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4294901760 len:1 00:07:01.773 [2024-05-14 11:43:28.762922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.773 [2024-05-14 11:43:28.762955] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:256 00:07:01.773 [2024-05-14 11:43:28.762970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.773 [2024-05-14 11:43:28.763021] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17361922952239575280 len:61681 00:07:01.773 [2024-05-14 11:43:28.763037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.773 NEW_FUNC[1/1]: 0x19feca0 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:01.773 #38 NEW cov: 12076 ft: 14148 corp: 16/270b lim: 50 exec/s: 0 rss: 71Mb L: 32/32 MS: 1 ChangeBit- 00:07:01.773 [2024-05-14 11:43:28.812809] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17505756669214257392 len:61681 00:07:01.773 [2024-05-14 11:43:28.812836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.773 #44 NEW cov: 12076 ft: 14230 corp: 17/285b lim: 50 exec/s: 0 rss: 71Mb L: 15/32 MS: 1 CopyPart- 00:07:01.773 [2024-05-14 11:43:28.852921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17505756669214257392 len:61665 00:07:01.773 [2024-05-14 11:43:28.852948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.032 #45 NEW cov: 12076 ft: 14257 corp: 18/300b lim: 50 exec/s: 45 rss: 71Mb L: 15/32 MS: 1 ChangeBit- 00:07:02.032 [2024-05-14 11:43:28.893023] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17361641481391046409 len:61681 00:07:02.032 [2024-05-14 11:43:28.893050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.032 #46 NEW cov: 12076 ft: 14265 corp: 19/315b lim: 50 exec/s: 46 rss: 71Mb L: 15/32 MS: 1 ChangeBinInt- 00:07:02.032 [2024-05-14 11:43:28.923120] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17505755801630863601 len:61681 00:07:02.032 [2024-05-14 11:43:28.923151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.032 #47 NEW cov: 12076 ft: 14289 corp: 20/330b lim: 50 exec/s: 47 rss: 72Mb L: 15/32 MS: 1 ChangeByte- 00:07:02.032 [2024-05-14 11:43:28.963214] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17361643680161657072 len:61665 00:07:02.032 [2024-05-14 11:43:28.963241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.032 #48 NEW cov: 12076 ft: 14314 corp: 21/345b lim: 50 exec/s: 48 rss: 72Mb L: 15/32 MS: 1 ShuffleBytes- 00:07:02.032 [2024-05-14 11:43:29.003365] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17468601972288450800 len:61665 00:07:02.032 [2024-05-14 11:43:29.003397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.032 #49 NEW cov: 12076 ft: 14328 corp: 22/360b lim: 50 exec/s: 49 rss: 72Mb L: 15/32 MS: 1 ChangeByte- 00:07:02.032 [2024-05-14 11:43:29.043464] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17442455817708171505 len:61681 00:07:02.032 [2024-05-14 11:43:29.043491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.032 #50 NEW cov: 12076 ft: 14342 corp: 23/375b lim: 50 exec/s: 50 rss: 72Mb L: 15/32 MS: 1 ChangeBinInt- 00:07:02.032 [2024-05-14 11:43:29.083936] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 
cid:0 nsid:0 lba:18446744073457893375 len:65536 00:07:02.032 [2024-05-14 11:43:29.083965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.032 [2024-05-14 11:43:29.084020] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:02.032 [2024-05-14 11:43:29.084037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.032 [2024-05-14 11:43:29.084088] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446727516610625535 len:61683 00:07:02.032 [2024-05-14 11:43:29.084105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.032 [2024-05-14 11:43:29.084155] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:16208719976531554544 len:45067 00:07:02.032 [2024-05-14 11:43:29.084172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.032 #51 NEW cov: 12076 ft: 14583 corp: 24/415b lim: 50 exec/s: 51 rss: 72Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:02.291 [2024-05-14 11:43:29.123721] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17468601972288450800 len:61665 00:07:02.292 [2024-05-14 11:43:29.123750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.292 #52 NEW cov: 12076 ft: 14603 corp: 25/431b lim: 50 exec/s: 52 rss: 72Mb L: 16/40 MS: 1 InsertByte- 00:07:02.292 [2024-05-14 11:43:29.163808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17456217073313181936 len:61681 00:07:02.292 [2024-05-14 11:43:29.163836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.292 #53 NEW cov: 12076 ft: 14617 corp: 26/447b lim: 50 exec/s: 53 rss: 72Mb L: 16/40 MS: 1 InsertByte- 00:07:02.292 [2024-05-14 11:43:29.204131] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4294901760 len:1 00:07:02.292 [2024-05-14 11:43:29.204159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.292 [2024-05-14 11:43:29.204191] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:256 00:07:02.292 [2024-05-14 11:43:29.204206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.292 [2024-05-14 11:43:29.204259] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17361922952239575280 len:61681 00:07:02.292 [2024-05-14 11:43:29.204274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.292 #54 NEW cov: 12076 ft: 14623 corp: 27/479b lim: 50 exec/s: 54 rss: 72Mb L: 32/40 MS: 1 ShuffleBytes- 00:07:02.292 [2024-05-14 11:43:29.244136] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17506038144190968048 len:61681 00:07:02.292 [2024-05-14 11:43:29.244164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.292 [2024-05-14 11:43:29.244193] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:720857419398312176 len:1 00:07:02.292 [2024-05-14 11:43:29.244209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.292 #55 NEW cov: 12076 ft: 14631 corp: 28/502b lim: 50 exec/s: 55 rss: 72Mb L: 23/40 MS: 1 CMP- DE: "\001\000\000\000\000\000\000?"- 00:07:02.292 [2024-05-14 11:43:29.284144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17505721484842168560 len:61696 00:07:02.292 [2024-05-14 11:43:29.284171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.292 #56 NEW cov: 12076 ft: 14636 corp: 29/515b lim: 50 exec/s: 56 rss: 72Mb L: 13/40 MS: 1 CrossOver- 00:07:02.292 [2024-05-14 11:43:29.324278] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17361641481138401520 len:61681 00:07:02.292 [2024-05-14 11:43:29.324320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.292 #57 NEW cov: 12076 ft: 14641 corp: 30/530b lim: 50 exec/s: 57 rss: 72Mb L: 15/40 MS: 1 ChangeBit- 00:07:02.292 [2024-05-14 11:43:29.364485] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17505756669214257392 len:61696 00:07:02.292 [2024-05-14 11:43:29.364513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.292 [2024-05-14 11:43:29.364560] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:02.292 [2024-05-14 11:43:29.364577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.551 #58 NEW cov: 12076 ft: 14667 corp: 31/557b lim: 50 exec/s: 58 rss: 72Mb L: 27/40 MS: 1 InsertRepeatedBytes- 00:07:02.551 [2024-05-14 11:43:29.404576] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17437937761220882673 len:1 00:07:02.551 [2024-05-14 11:43:29.404604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.551 [2024-05-14 11:43:29.404637] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:17361641481138349808 len:61451 00:07:02.551 [2024-05-14 11:43:29.404653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.551 #59 NEW cov: 12076 ft: 14685 corp: 32/577b lim: 50 exec/s: 59 rss: 72Mb L: 20/40 MS: 1 InsertRepeatedBytes- 00:07:02.551 [2024-05-14 11:43:29.444945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073457893375 len:65536 00:07:02.551 [2024-05-14 11:43:29.444976] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.551 [2024-05-14 11:43:29.445009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65528 00:07:02.551 [2024-05-14 11:43:29.445025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.551 [2024-05-14 11:43:29.445077] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446727516610625535 len:61683 00:07:02.551 [2024-05-14 11:43:29.445093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.551 [2024-05-14 11:43:29.445144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:16208719976531554544 len:45067 00:07:02.551 [2024-05-14 11:43:29.445159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:02.551 #60 NEW cov: 12076 ft: 14746 corp: 33/617b lim: 50 exec/s: 60 rss: 73Mb L: 40/40 MS: 1 ChangeBinInt- 00:07:02.551 [2024-05-14 11:43:29.494925] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4294901760 len:256 00:07:02.551 [2024-05-14 11:43:29.494953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.551 [2024-05-14 11:43:29.495006] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:17361922952239575280 len:61681 00:07:02.551 [2024-05-14 11:43:29.495023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.551 #61 NEW cov: 12076 ft: 14754 corp: 34/639b lim: 50 exec/s: 61 rss: 73Mb L: 22/40 MS: 1 EraseBytes- 00:07:02.551 [2024-05-14 11:43:29.534895] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446742986831167487 len:48625 00:07:02.551 [2024-05-14 11:43:29.534923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.551 #62 NEW cov: 12076 ft: 14765 corp: 35/654b lim: 50 exec/s: 62 rss: 73Mb L: 15/40 MS: 1 CMP- DE: "\377\377\377\377\377\377\002\275"- 00:07:02.551 [2024-05-14 11:43:29.575238] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4294901760 len:1 00:07:02.551 [2024-05-14 11:43:29.575266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.551 [2024-05-14 11:43:29.575310] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:250 00:07:02.551 [2024-05-14 11:43:29.575326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.551 [2024-05-14 11:43:29.575377] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17361922952239575280 len:61681 00:07:02.551 [2024-05-14 11:43:29.575398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.551 #63 NEW cov: 12076 ft: 14774 corp: 36/686b lim: 50 exec/s: 63 rss: 73Mb L: 32/40 MS: 1 ChangeBinInt- 00:07:02.551 [2024-05-14 11:43:29.615306] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4294901823 len:1 00:07:02.551 [2024-05-14 11:43:29.615334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.551 [2024-05-14 11:43:29.615362] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:07:02.551 [2024-05-14 11:43:29.615383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.551 [2024-05-14 11:43:29.615440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17361642580886548720 len:61681 00:07:02.551 [2024-05-14 11:43:29.615456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.551 #64 NEW cov: 12076 ft: 14782 corp: 37/719b lim: 50 exec/s: 64 rss: 73Mb L: 33/40 MS: 1 InsertByte- 00:07:02.810 [2024-05-14 11:43:29.655226] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17442455817708171505 len:61681 00:07:02.810 [2024-05-14 11:43:29.655253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.810 #65 NEW cov: 12076 ft: 14831 corp: 38/734b lim: 50 exec/s: 65 rss: 73Mb L: 15/40 MS: 1 ShuffleBytes- 00:07:02.810 [2024-05-14 11:43:29.695588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:4294901760 len:1 00:07:02.810 [2024-05-14 11:43:29.695615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.810 [2024-05-14 11:43:29.695655] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:256 00:07:02.810 [2024-05-14 11:43:29.695671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.810 [2024-05-14 11:43:29.695723] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17361922956367757065 len:61681 00:07:02.810 [2024-05-14 11:43:29.695739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.810 #66 NEW cov: 12076 ft: 14838 corp: 39/766b lim: 50 exec/s: 66 rss: 73Mb L: 32/40 MS: 1 PersAutoDict- DE: "\377\377\377\011"- 00:07:02.810 [2024-05-14 11:43:29.735538] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17437937761220882673 len:1 00:07:02.810 [2024-05-14 11:43:29.735564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.810 [2024-05-14 11:43:29.735594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16505516914063189744 len:25150 00:07:02.810 [2024-05-14 11:43:29.735610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.810 #67 NEW cov: 12076 ft: 14843 corp: 40/794b lim: 50 exec/s: 67 rss: 73Mb L: 28/40 MS: 1 CMP- DE: "\345\017`.b=\205\000"- 00:07:02.810 [2024-05-14 11:43:29.775779] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17468601972288450800 len:61665 00:07:02.810 [2024-05-14 11:43:29.775806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.810 [2024-05-14 11:43:29.775841] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:716337258629693424 len:61681 00:07:02.810 [2024-05-14 11:43:29.775856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.810 [2024-05-14 11:43:29.775904] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17361641470132488944 len:61681 00:07:02.810 [2024-05-14 11:43:29.775919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.810 #68 NEW cov: 12076 ft: 14860 corp: 41/825b lim: 50 exec/s: 68 rss: 73Mb L: 31/40 MS: 1 CrossOver- 00:07:02.810 [2024-05-14 11:43:29.815695] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17505756669214257392 len:61681 00:07:02.810 [2024-05-14 11:43:29.815723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.810 #69 NEW cov: 12076 ft: 14864 corp: 42/840b lim: 50 exec/s: 69 rss: 73Mb L: 15/40 MS: 1 ShuffleBytes- 00:07:02.810 [2024-05-14 11:43:29.856025] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:17361641481391046409 len:61681 00:07:02.810 [2024-05-14 11:43:29.856053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.810 [2024-05-14 11:43:29.856096] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:720575944421601520 len:256 00:07:02.810 [2024-05-14 11:43:29.856112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:02.810 [2024-05-14 11:43:29.856163] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17361922956367757065 len:61681 00:07:02.810 [2024-05-14 11:43:29.856195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:02.810 #70 NEW cov: 12076 ft: 14868 corp: 43/872b lim: 50 exec/s: 35 rss: 73Mb L: 32/40 MS: 1 CrossOver- 00:07:02.810 #70 DONE cov: 12076 ft: 14868 corp: 43/872b lim: 50 exec/s: 35 rss: 73Mb 00:07:02.810 ###### Recommended dictionary. ###### 00:07:02.810 "\377\377\377\011" # Uses: 1 00:07:02.810 "\001\000\000\000\000\000\000?" # Uses: 0 00:07:02.810 "\377\377\377\377\377\377\002\275" # Uses: 0 00:07:02.811 "\345\017`.b=\205\000" # Uses: 0 00:07:02.811 ###### End of recommended dictionary. 
###### 00:07:02.811 Done 70 runs in 2 second(s) 00:07:02.811 [2024-05-14 11:43:29.885252] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:03.070 11:43:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:07:03.070 11:43:30 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:03.070 11:43:30 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:03.070 11:43:30 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:07:03.070 11:43:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:07:03.070 11:43:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:03.070 11:43:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:03.070 11:43:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:03.070 11:43:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:07:03.070 11:43:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:03.070 11:43:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:03.070 11:43:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:07:03.070 11:43:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4420 00:07:03.070 11:43:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:03.070 11:43:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:07:03.070 11:43:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:03.070 11:43:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:03.070 11:43:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:03.070 11:43:30 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:07:03.070 [2024-05-14 11:43:30.049914] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:07:03.070 [2024-05-14 11:43:30.049989] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642002 ] 00:07:03.070 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.329 [2024-05-14 11:43:30.230106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.329 [2024-05-14 11:43:30.297925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.329 [2024-05-14 11:43:30.357301] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:03.329 [2024-05-14 11:43:30.373254] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:03.329 [2024-05-14 11:43:30.373681] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:03.329 INFO: Running with entropic power schedule (0xFF, 100). 00:07:03.329 INFO: Seed: 2392770301 00:07:03.329 INFO: Loaded 1 modules (351429 inline 8-bit counters): 351429 [0x290e30c, 0x2963fd1), 00:07:03.329 INFO: Loaded 1 PC tables (351429 PCs): 351429 [0x2963fd8,0x2ec0c28), 00:07:03.329 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:07:03.329 INFO: A corpus is not provided, starting from an empty corpus 00:07:03.329 #2 INITED exec/s: 0 rss: 63Mb 00:07:03.329 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:03.329 This may also happen if the target rejected all inputs we tried so far 00:07:03.587 [2024-05-14 11:43:30.421976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.587 [2024-05-14 11:43:30.422007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.587 [2024-05-14 11:43:30.422039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:03.587 [2024-05-14 11:43:30.422054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.587 [2024-05-14 11:43:30.422106] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:03.587 [2024-05-14 11:43:30.422122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.846 NEW_FUNC[1/687]: 0x4a3fd0 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:07:03.846 NEW_FUNC[2/687]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:03.846 #8 NEW cov: 11890 ft: 11869 corp: 2/59b lim: 90 exec/s: 0 rss: 70Mb L: 58/58 MS: 1 InsertRepeatedBytes- 00:07:03.846 [2024-05-14 11:43:30.752874] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.846 [2024-05-14 11:43:30.752911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.846 [2024-05-14 11:43:30.752956] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:03.846 [2024-05-14 11:43:30.752973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.846 [2024-05-14 11:43:30.753027] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:03.846 [2024-05-14 11:43:30.753044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.846 #9 NEW cov: 12020 ft: 12417 corp: 3/117b lim: 90 exec/s: 0 rss: 70Mb L: 58/58 MS: 1 ChangeBinInt- 00:07:03.846 [2024-05-14 11:43:30.802945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.846 [2024-05-14 11:43:30.802973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.846 [2024-05-14 11:43:30.803020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:03.846 [2024-05-14 11:43:30.803036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.846 [2024-05-14 11:43:30.803094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:03.846 [2024-05-14 11:43:30.803109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.846 #10 NEW cov: 12026 ft: 12836 corp: 4/175b lim: 90 exec/s: 0 rss: 71Mb L: 58/58 MS: 1 ChangeByte- 00:07:03.846 [2024-05-14 11:43:30.842732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.846 [2024-05-14 11:43:30.842760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.846 #16 NEW cov: 12111 ft: 13926 corp: 5/195b lim: 90 exec/s: 0 rss: 71Mb L: 20/58 MS: 1 CrossOver- 00:07:03.846 [2024-05-14 11:43:30.893208] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.846 [2024-05-14 11:43:30.893237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.846 [2024-05-14 11:43:30.893271] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:03.846 [2024-05-14 11:43:30.893286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.846 [2024-05-14 11:43:30.893344] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:03.846 [2024-05-14 11:43:30.893360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.846 #17 NEW cov: 12111 ft: 13973 corp: 6/251b lim: 90 exec/s: 0 rss: 71Mb L: 56/58 MS: 1 EraseBytes- 00:07:03.846 [2024-05-14 11:43:30.933020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:03.846 [2024-05-14 11:43:30.933049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:07:04.105 #18 NEW cov: 12111 ft: 14008 corp: 7/283b lim: 90 exec/s: 0 rss: 71Mb L: 32/58 MS: 1 CrossOver- 00:07:04.105 [2024-05-14 11:43:30.973441] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.105 [2024-05-14 11:43:30.973470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.105 [2024-05-14 11:43:30.973519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.105 [2024-05-14 11:43:30.973535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.105 [2024-05-14 11:43:30.973593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:04.105 [2024-05-14 11:43:30.973608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.105 #19 NEW cov: 12111 ft: 14042 corp: 8/342b lim: 90 exec/s: 0 rss: 71Mb L: 59/59 MS: 1 InsertRepeatedBytes- 00:07:04.105 [2024-05-14 11:43:31.023615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.105 [2024-05-14 11:43:31.023643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.105 [2024-05-14 11:43:31.023678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.105 [2024-05-14 11:43:31.023694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.105 [2024-05-14 11:43:31.023797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:04.105 [2024-05-14 11:43:31.023813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.105 #20 NEW cov: 12111 ft: 14089 corp: 9/401b lim: 90 exec/s: 0 rss: 71Mb L: 59/59 MS: 1 InsertByte- 00:07:04.105 [2024-05-14 11:43:31.063748] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.105 [2024-05-14 11:43:31.063776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.105 [2024-05-14 11:43:31.063812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.105 [2024-05-14 11:43:31.063828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.105 [2024-05-14 11:43:31.063882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:04.105 [2024-05-14 11:43:31.063898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.105 #21 NEW cov: 12111 ft: 14163 corp: 10/460b lim: 90 exec/s: 0 rss: 71Mb L: 59/59 MS: 1 InsertByte- 00:07:04.105 [2024-05-14 11:43:31.103816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.105 [2024-05-14 11:43:31.103844] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.105 [2024-05-14 11:43:31.103890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.105 [2024-05-14 11:43:31.103905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.105 [2024-05-14 11:43:31.103961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:04.105 [2024-05-14 11:43:31.103976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.105 #22 NEW cov: 12111 ft: 14202 corp: 11/528b lim: 90 exec/s: 0 rss: 71Mb L: 68/68 MS: 1 CrossOver- 00:07:04.105 [2024-05-14 11:43:31.143908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.105 [2024-05-14 11:43:31.143937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.105 [2024-05-14 11:43:31.143976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.106 [2024-05-14 11:43:31.143992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.106 [2024-05-14 11:43:31.144050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:04.106 [2024-05-14 11:43:31.144066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.106 #23 NEW cov: 12111 ft: 14220 corp: 12/586b lim: 90 exec/s: 0 rss: 72Mb L: 58/68 MS: 1 ChangeBit- 00:07:04.106 [2024-05-14 11:43:31.184032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.106 [2024-05-14 11:43:31.184061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.106 [2024-05-14 11:43:31.184106] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.106 [2024-05-14 11:43:31.184122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.106 [2024-05-14 11:43:31.184180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:04.106 [2024-05-14 11:43:31.184199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.364 #24 NEW cov: 12111 ft: 14242 corp: 13/645b lim: 90 exec/s: 0 rss: 72Mb L: 59/68 MS: 1 CopyPart- 00:07:04.364 [2024-05-14 11:43:31.224182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.364 [2024-05-14 11:43:31.224210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.364 [2024-05-14 11:43:31.224260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.364 [2024-05-14 11:43:31.224276] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.364 [2024-05-14 11:43:31.224334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:04.364 [2024-05-14 11:43:31.224348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.364 #25 NEW cov: 12111 ft: 14314 corp: 14/705b lim: 90 exec/s: 0 rss: 72Mb L: 60/68 MS: 1 InsertByte- 00:07:04.364 [2024-05-14 11:43:31.264267] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.364 [2024-05-14 11:43:31.264295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.364 [2024-05-14 11:43:31.264331] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.364 [2024-05-14 11:43:31.264346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.364 [2024-05-14 11:43:31.264406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:04.364 [2024-05-14 11:43:31.264421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.365 #26 NEW cov: 12111 ft: 14333 corp: 15/772b lim: 90 exec/s: 0 rss: 72Mb L: 67/68 MS: 1 InsertRepeatedBytes- 00:07:04.365 [2024-05-14 11:43:31.304375] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.365 [2024-05-14 11:43:31.304406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.365 [2024-05-14 11:43:31.304457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.365 [2024-05-14 11:43:31.304470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.365 [2024-05-14 11:43:31.304544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:04.365 [2024-05-14 11:43:31.304561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.365 NEW_FUNC[1/1]: 0x19feca0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:04.365 #27 NEW cov: 12134 ft: 14368 corp: 16/832b lim: 90 exec/s: 0 rss: 72Mb L: 60/68 MS: 1 CopyPart- 00:07:04.365 [2024-05-14 11:43:31.344535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.365 [2024-05-14 11:43:31.344562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.365 [2024-05-14 11:43:31.344596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.365 [2024-05-14 11:43:31.344612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.365 [2024-05-14 11:43:31.344670] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:04.365 [2024-05-14 11:43:31.344689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.365 #28 NEW cov: 12134 ft: 14390 corp: 17/890b lim: 90 exec/s: 0 rss: 72Mb L: 58/68 MS: 1 ChangeBit- 00:07:04.365 [2024-05-14 11:43:31.384608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.365 [2024-05-14 11:43:31.384636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.365 [2024-05-14 11:43:31.384671] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.365 [2024-05-14 11:43:31.384687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.365 [2024-05-14 11:43:31.384745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:04.365 [2024-05-14 11:43:31.384760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.365 #29 NEW cov: 12134 ft: 14400 corp: 18/959b lim: 90 exec/s: 29 rss: 72Mb L: 69/69 MS: 1 InsertRepeatedBytes- 00:07:04.365 [2024-05-14 11:43:31.424757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.365 [2024-05-14 11:43:31.424784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.365 [2024-05-14 11:43:31.424832] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.365 [2024-05-14 11:43:31.424848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.365 [2024-05-14 11:43:31.424906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:04.365 [2024-05-14 11:43:31.424921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.365 #30 NEW cov: 12134 ft: 14416 corp: 19/1019b lim: 90 exec/s: 30 rss: 72Mb L: 60/69 MS: 1 ChangeByte- 00:07:04.624 [2024-05-14 11:43:31.464877] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.624 [2024-05-14 11:43:31.464905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.624 [2024-05-14 11:43:31.464958] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.624 [2024-05-14 11:43:31.464974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.624 [2024-05-14 11:43:31.465031] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:04.624 [2024-05-14 11:43:31.465046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.624 #31 NEW cov: 12134 ft: 14473 corp: 20/1079b 
lim: 90 exec/s: 31 rss: 72Mb L: 60/69 MS: 1 InsertByte- 00:07:04.624 [2024-05-14 11:43:31.504831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.624 [2024-05-14 11:43:31.504857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.624 [2024-05-14 11:43:31.504913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.624 [2024-05-14 11:43:31.504929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.624 #34 NEW cov: 12134 ft: 14761 corp: 21/1119b lim: 90 exec/s: 34 rss: 72Mb L: 40/69 MS: 3 CrossOver-ChangeBinInt-InsertRepeatedBytes- 00:07:04.624 [2024-05-14 11:43:31.544930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.624 [2024-05-14 11:43:31.544958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.624 [2024-05-14 11:43:31.545021] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.625 [2024-05-14 11:43:31.545037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.625 #35 NEW cov: 12134 ft: 14802 corp: 22/1159b lim: 90 exec/s: 35 rss: 72Mb L: 40/69 MS: 1 CMP- DE: "\000\000\177'@\"\260\201"- 00:07:04.625 [2024-05-14 11:43:31.595093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.625 [2024-05-14 11:43:31.595121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.625 [2024-05-14 11:43:31.595185] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.625 [2024-05-14 11:43:31.595202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.625 #36 NEW cov: 12134 ft: 14839 corp: 23/1199b lim: 90 exec/s: 36 rss: 72Mb L: 40/69 MS: 1 EraseBytes- 00:07:04.625 [2024-05-14 11:43:31.635362] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.625 [2024-05-14 11:43:31.635394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.625 [2024-05-14 11:43:31.635438] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.625 [2024-05-14 11:43:31.635454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.625 [2024-05-14 11:43:31.635511] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:04.625 [2024-05-14 11:43:31.635527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.625 #37 NEW cov: 12134 ft: 14957 corp: 24/1255b lim: 90 exec/s: 37 rss: 72Mb L: 56/69 MS: 1 CopyPart- 00:07:04.625 [2024-05-14 11:43:31.675440] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.625 [2024-05-14 11:43:31.675469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.625 [2024-05-14 11:43:31.675513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.625 [2024-05-14 11:43:31.675528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.625 [2024-05-14 11:43:31.675586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:04.625 [2024-05-14 11:43:31.675602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.625 #38 NEW cov: 12134 ft: 14976 corp: 25/1311b lim: 90 exec/s: 38 rss: 72Mb L: 56/69 MS: 1 ShuffleBytes- 00:07:04.884 [2024-05-14 11:43:31.715238] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.884 [2024-05-14 11:43:31.715266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.884 #39 NEW cov: 12134 ft: 15048 corp: 26/1343b lim: 90 exec/s: 39 rss: 72Mb L: 32/69 MS: 1 ChangeBinInt- 00:07:04.884 [2024-05-14 11:43:31.755350] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.884 [2024-05-14 11:43:31.755377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.884 #40 NEW cov: 12134 ft: 15099 corp: 27/1377b lim: 90 exec/s: 40 rss: 73Mb L: 34/69 MS: 1 EraseBytes- 00:07:04.884 [2024-05-14 11:43:31.805665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.884 [2024-05-14 11:43:31.805697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.884 [2024-05-14 11:43:31.805762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.884 [2024-05-14 11:43:31.805779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.884 #41 NEW cov: 12134 ft: 15121 corp: 28/1423b lim: 90 exec/s: 41 rss: 73Mb L: 46/69 MS: 1 EraseBytes- 00:07:04.884 [2024-05-14 11:43:31.855962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.884 [2024-05-14 11:43:31.855990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.884 [2024-05-14 11:43:31.856031] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.884 [2024-05-14 11:43:31.856047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.884 [2024-05-14 11:43:31.856123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:04.884 [2024-05-14 11:43:31.856140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 
p:0 m:0 dnr:1 00:07:04.884 #42 NEW cov: 12134 ft: 15142 corp: 29/1483b lim: 90 exec/s: 42 rss: 73Mb L: 60/69 MS: 1 CopyPart- 00:07:04.884 [2024-05-14 11:43:31.906131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.884 [2024-05-14 11:43:31.906159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.884 [2024-05-14 11:43:31.906200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.884 [2024-05-14 11:43:31.906217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.884 [2024-05-14 11:43:31.906274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:04.884 [2024-05-14 11:43:31.906289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.884 #43 NEW cov: 12134 ft: 15157 corp: 30/1542b lim: 90 exec/s: 43 rss: 73Mb L: 59/69 MS: 1 ChangeBit- 00:07:04.884 [2024-05-14 11:43:31.946237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:04.884 [2024-05-14 11:43:31.946264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.884 [2024-05-14 11:43:31.946305] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:04.884 [2024-05-14 11:43:31.946321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.885 [2024-05-14 11:43:31.946383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:04.885 [2024-05-14 11:43:31.946400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.144 #44 NEW cov: 12134 ft: 15168 corp: 31/1598b lim: 90 exec/s: 44 rss: 73Mb L: 56/69 MS: 1 ChangeByte- 00:07:05.144 [2024-05-14 11:43:31.986659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:05.144 [2024-05-14 11:43:31.986686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.144 [2024-05-14 11:43:31.986735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:05.144 [2024-05-14 11:43:31.986751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.144 [2024-05-14 11:43:31.986825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:05.144 [2024-05-14 11:43:31.986842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.144 [2024-05-14 11:43:31.986901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:05.144 [2024-05-14 11:43:31.986916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 
dnr:1 00:07:05.144 [2024-05-14 11:43:31.986974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:05.144 [2024-05-14 11:43:31.986990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:05.144 #45 NEW cov: 12134 ft: 15560 corp: 32/1688b lim: 90 exec/s: 45 rss: 73Mb L: 90/90 MS: 1 InsertRepeatedBytes- 00:07:05.144 [2024-05-14 11:43:32.026436] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:05.144 [2024-05-14 11:43:32.026464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.144 [2024-05-14 11:43:32.026503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:05.144 [2024-05-14 11:43:32.026517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.144 [2024-05-14 11:43:32.026577] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:05.144 [2024-05-14 11:43:32.026593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.144 #46 NEW cov: 12134 ft: 15574 corp: 33/1747b lim: 90 exec/s: 46 rss: 73Mb L: 59/90 MS: 1 ChangeBinInt- 00:07:05.144 [2024-05-14 11:43:32.066418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:05.144 [2024-05-14 11:43:32.066447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.144 [2024-05-14 11:43:32.066496] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:05.145 [2024-05-14 11:43:32.066513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.145 #47 NEW cov: 12134 ft: 15575 corp: 34/1793b lim: 90 exec/s: 47 rss: 73Mb L: 46/90 MS: 1 InsertRepeatedBytes- 00:07:05.145 [2024-05-14 11:43:32.106499] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:05.145 [2024-05-14 11:43:32.106525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.145 [2024-05-14 11:43:32.106575] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:05.145 [2024-05-14 11:43:32.106591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.145 #48 NEW cov: 12134 ft: 15611 corp: 35/1833b lim: 90 exec/s: 48 rss: 73Mb L: 40/90 MS: 1 PersAutoDict- DE: "\000\000\177'@\"\260\201"- 00:07:05.145 [2024-05-14 11:43:32.156777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:05.145 [2024-05-14 11:43:32.156805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.145 [2024-05-14 11:43:32.156841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 
00:07:05.145 [2024-05-14 11:43:32.156855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.145 [2024-05-14 11:43:32.156916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:05.145 [2024-05-14 11:43:32.156932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.145 #49 NEW cov: 12134 ft: 15670 corp: 36/1892b lim: 90 exec/s: 49 rss: 73Mb L: 59/90 MS: 1 InsertByte- 00:07:05.145 [2024-05-14 11:43:32.196925] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:05.145 [2024-05-14 11:43:32.196953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.145 [2024-05-14 11:43:32.196991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:05.145 [2024-05-14 11:43:32.197007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.145 [2024-05-14 11:43:32.197067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:05.145 [2024-05-14 11:43:32.197084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.145 #50 NEW cov: 12134 ft: 15675 corp: 37/1953b lim: 90 exec/s: 50 rss: 73Mb L: 61/90 MS: 1 InsertByte- 00:07:05.406 [2024-05-14 11:43:32.237027] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:05.406 [2024-05-14 11:43:32.237054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.406 [2024-05-14 11:43:32.237099] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:05.406 [2024-05-14 11:43:32.237116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.406 [2024-05-14 11:43:32.237174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:05.406 [2024-05-14 11:43:32.237190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.406 #51 NEW cov: 12134 ft: 15681 corp: 38/2014b lim: 90 exec/s: 51 rss: 74Mb L: 61/90 MS: 1 PersAutoDict- DE: "\000\000\177'@\"\260\201"- 00:07:05.406 [2024-05-14 11:43:32.277414] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:05.406 [2024-05-14 11:43:32.277443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.406 [2024-05-14 11:43:32.277493] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:05.406 [2024-05-14 11:43:32.277506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.406 [2024-05-14 11:43:32.277564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:05.406 [2024-05-14 11:43:32.277577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.406 [2024-05-14 11:43:32.277636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:07:05.406 [2024-05-14 11:43:32.277652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:05.407 [2024-05-14 11:43:32.277711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:07:05.407 [2024-05-14 11:43:32.277727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:05.407 #52 NEW cov: 12134 ft: 15685 corp: 39/2104b lim: 90 exec/s: 52 rss: 74Mb L: 90/90 MS: 1 CopyPart- 00:07:05.407 [2024-05-14 11:43:32.327294] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:05.407 [2024-05-14 11:43:32.327325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.407 [2024-05-14 11:43:32.327361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:05.407 [2024-05-14 11:43:32.327375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.407 [2024-05-14 11:43:32.327438] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:05.407 [2024-05-14 11:43:32.327454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.407 #53 NEW cov: 12134 ft: 15706 corp: 40/2163b lim: 90 exec/s: 53 rss: 74Mb L: 59/90 MS: 1 ChangeByte- 00:07:05.407 [2024-05-14 11:43:32.367402] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:05.407 [2024-05-14 11:43:32.367429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.407 [2024-05-14 11:43:32.367478] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:07:05.407 [2024-05-14 11:43:32.367494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.407 [2024-05-14 11:43:32.367564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:05.407 [2024-05-14 11:43:32.367581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.407 #54 NEW cov: 12134 ft: 15717 corp: 41/2222b lim: 90 exec/s: 54 rss: 74Mb L: 59/90 MS: 1 ChangeBit- 00:07:05.407 [2024-05-14 11:43:32.407500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:07:05.407 [2024-05-14 11:43:32.407535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:05.407 [2024-05-14 11:43:32.407580] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE 
(11) sqid:1 cid:1 nsid:0 00:07:05.407 [2024-05-14 11:43:32.407597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:05.407 [2024-05-14 11:43:32.407654] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:07:05.407 [2024-05-14 11:43:32.407670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:05.407 #55 NEW cov: 12134 ft: 15769 corp: 42/2291b lim: 90 exec/s: 27 rss: 74Mb L: 69/90 MS: 1 CopyPart- 00:07:05.407 #55 DONE cov: 12134 ft: 15769 corp: 42/2291b lim: 90 exec/s: 27 rss: 74Mb 00:07:05.407 ###### Recommended dictionary. ###### 00:07:05.407 "\000\000\177'@\"\260\201" # Uses: 2 00:07:05.407 ###### End of recommended dictionary. ###### 00:07:05.407 Done 55 runs in 2 second(s) 00:07:05.407 [2024-05-14 11:43:32.427276] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:05.668 11:43:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:07:05.668 11:43:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:05.668 11:43:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:05.668 11:43:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:07:05.668 11:43:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:07:05.668 11:43:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:05.668 11:43:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:05.668 11:43:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:05.668 11:43:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:07:05.668 11:43:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:05.668 11:43:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:05.668 11:43:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:07:05.668 11:43:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4421 00:07:05.668 11:43:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:05.668 11:43:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:07:05.668 11:43:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:05.668 11:43:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:05.668 11:43:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:05.668 11:43:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:07:05.668 [2024-05-14 11:43:32.594859] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:07:05.668 [2024-05-14 11:43:32.594949] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642489 ] 00:07:05.668 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.927 [2024-05-14 11:43:32.773745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.927 [2024-05-14 11:43:32.842009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.927 [2024-05-14 11:43:32.901461] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.927 [2024-05-14 11:43:32.917415] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:05.927 [2024-05-14 11:43:32.917828] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:07:05.927 INFO: Running with entropic power schedule (0xFF, 100). 00:07:05.927 INFO: Seed: 644811626 00:07:05.927 INFO: Loaded 1 modules (351429 inline 8-bit counters): 351429 [0x290e30c, 0x2963fd1), 00:07:05.927 INFO: Loaded 1 PC tables (351429 PCs): 351429 [0x2963fd8,0x2ec0c28), 00:07:05.927 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:07:05.927 INFO: A corpus is not provided, starting from an empty corpus 00:07:05.927 #2 INITED exec/s: 0 rss: 64Mb 00:07:05.927 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:05.927 This may also happen if the target rejected all inputs we tried so far 00:07:05.927 [2024-05-14 11:43:32.972974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:05.927 [2024-05-14 11:43:32.973004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.497 NEW_FUNC[1/687]: 0x4a71f0 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:07:06.497 NEW_FUNC[2/687]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:06.497 #4 NEW cov: 11865 ft: 11864 corp: 2/11b lim: 50 exec/s: 0 rss: 70Mb L: 10/10 MS: 2 InsertByte-CMP- DE: "\001\205=d\223\3773L"- 00:07:06.497 [2024-05-14 11:43:33.304126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.497 [2024-05-14 11:43:33.304159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.497 [2024-05-14 11:43:33.304208] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.497 [2024-05-14 11:43:33.304223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.497 [2024-05-14 11:43:33.304275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.497 [2024-05-14 11:43:33.304290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.497 [2024-05-14 11:43:33.304340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:06.497 [2024-05-14 11:43:33.304355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.497 #5 NEW cov: 11995 ft: 13313 corp: 3/57b lim: 50 exec/s: 0 rss: 70Mb L: 46/46 MS: 1 InsertRepeatedBytes- 00:07:06.497 [2024-05-14 11:43:33.354167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.497 [2024-05-14 11:43:33.354195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.497 [2024-05-14 11:43:33.354231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.497 [2024-05-14 11:43:33.354246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.497 [2024-05-14 11:43:33.354298] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.497 [2024-05-14 11:43:33.354313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.497 [2024-05-14 11:43:33.354362] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:06.497 [2024-05-14 11:43:33.354377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 
p:0 m:0 dnr:1 00:07:06.497 #6 NEW cov: 12001 ft: 13561 corp: 4/104b lim: 50 exec/s: 0 rss: 70Mb L: 47/47 MS: 1 InsertByte- 00:07:06.497 [2024-05-14 11:43:33.404310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.497 [2024-05-14 11:43:33.404337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.497 [2024-05-14 11:43:33.404399] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.497 [2024-05-14 11:43:33.404416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.497 [2024-05-14 11:43:33.404479] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.497 [2024-05-14 11:43:33.404494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.497 [2024-05-14 11:43:33.404546] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:06.497 [2024-05-14 11:43:33.404560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.497 #7 NEW cov: 12086 ft: 13805 corp: 5/151b lim: 50 exec/s: 0 rss: 71Mb L: 47/47 MS: 1 ChangeBinInt- 00:07:06.497 [2024-05-14 11:43:33.454010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.497 [2024-05-14 11:43:33.454036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.497 #12 NEW cov: 12086 ft: 13980 corp: 6/167b lim: 50 exec/s: 0 rss: 71Mb L: 16/47 MS: 5 ShuffleBytes-ShuffleBytes-ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:07:06.497 [2024-05-14 11:43:33.494561] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.497 [2024-05-14 11:43:33.494591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.497 [2024-05-14 11:43:33.494624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.497 [2024-05-14 11:43:33.494639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.497 [2024-05-14 11:43:33.494690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.497 [2024-05-14 11:43:33.494705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.497 [2024-05-14 11:43:33.494754] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:06.497 [2024-05-14 11:43:33.494770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.497 #13 NEW cov: 12086 ft: 14017 corp: 7/216b lim: 50 exec/s: 0 rss: 71Mb L: 49/49 MS: 1 InsertRepeatedBytes- 00:07:06.497 [2024-05-14 11:43:33.534720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 
00:07:06.497 [2024-05-14 11:43:33.534747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.497 [2024-05-14 11:43:33.534806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.497 [2024-05-14 11:43:33.534822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.497 [2024-05-14 11:43:33.534874] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.497 [2024-05-14 11:43:33.534889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.497 [2024-05-14 11:43:33.534940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:06.497 [2024-05-14 11:43:33.534955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.497 #14 NEW cov: 12086 ft: 14134 corp: 8/265b lim: 50 exec/s: 0 rss: 71Mb L: 49/49 MS: 1 InsertRepeatedBytes- 00:07:06.497 [2024-05-14 11:43:33.584422] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.497 [2024-05-14 11:43:33.584450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.755 #15 NEW cov: 12086 ft: 14174 corp: 9/275b lim: 50 exec/s: 0 rss: 71Mb L: 10/49 MS: 1 ChangeASCIIInt- 00:07:06.755 [2024-05-14 11:43:33.624488] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.755 [2024-05-14 11:43:33.624514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.755 #16 NEW cov: 12086 ft: 14190 corp: 10/286b lim: 50 exec/s: 0 rss: 71Mb L: 11/49 MS: 1 InsertByte- 00:07:06.755 [2024-05-14 11:43:33.664641] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.755 [2024-05-14 11:43:33.664668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.755 #17 NEW cov: 12086 ft: 14225 corp: 11/302b lim: 50 exec/s: 0 rss: 71Mb L: 16/49 MS: 1 ChangeBit- 00:07:06.755 [2024-05-14 11:43:33.705127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.755 [2024-05-14 11:43:33.705155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.755 [2024-05-14 11:43:33.705200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.755 [2024-05-14 11:43:33.705220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.755 [2024-05-14 11:43:33.705269] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.755 [2024-05-14 11:43:33.705284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.755 [2024-05-14 
11:43:33.705336] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:06.756 [2024-05-14 11:43:33.705351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.756 #18 NEW cov: 12086 ft: 14264 corp: 12/349b lim: 50 exec/s: 0 rss: 71Mb L: 47/49 MS: 1 InsertByte- 00:07:06.756 [2024-05-14 11:43:33.745266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.756 [2024-05-14 11:43:33.745295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.756 [2024-05-14 11:43:33.745333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.756 [2024-05-14 11:43:33.745348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.756 [2024-05-14 11:43:33.745403] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.756 [2024-05-14 11:43:33.745418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.756 [2024-05-14 11:43:33.745469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:06.756 [2024-05-14 11:43:33.745483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.756 #21 NEW cov: 12086 ft: 14320 corp: 13/395b lim: 50 exec/s: 0 rss: 71Mb L: 46/49 MS: 3 CrossOver-CopyPart-InsertRepeatedBytes- 00:07:06.756 [2024-05-14 11:43:33.795433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.756 [2024-05-14 11:43:33.795461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:06.756 [2024-05-14 11:43:33.795506] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.756 [2024-05-14 11:43:33.795520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.756 [2024-05-14 11:43:33.795572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.756 [2024-05-14 11:43:33.795588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:06.756 [2024-05-14 11:43:33.795640] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:06.756 [2024-05-14 11:43:33.795654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:06.756 #22 NEW cov: 12086 ft: 14387 corp: 14/442b lim: 50 exec/s: 0 rss: 71Mb L: 47/49 MS: 1 ChangeByte- 00:07:06.756 [2024-05-14 11:43:33.835373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:06.756 [2024-05-14 11:43:33.835405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:06.756 [2024-05-14 11:43:33.835441] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:06.756 [2024-05-14 11:43:33.835456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:06.756 [2024-05-14 11:43:33.835509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:06.756 [2024-05-14 11:43:33.835544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.015 NEW_FUNC[1/1]: 0x19feca0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:07.015 #23 NEW cov: 12109 ft: 14707 corp: 15/481b lim: 50 exec/s: 0 rss: 71Mb L: 39/49 MS: 1 InsertRepeatedBytes- 00:07:07.015 [2024-05-14 11:43:33.875197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.015 [2024-05-14 11:43:33.875225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.015 #24 NEW cov: 12109 ft: 14732 corp: 16/498b lim: 50 exec/s: 0 rss: 71Mb L: 17/49 MS: 1 InsertByte- 00:07:07.015 [2024-05-14 11:43:33.915735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.015 [2024-05-14 11:43:33.915763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.015 [2024-05-14 11:43:33.915809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:07.015 [2024-05-14 11:43:33.915825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.015 [2024-05-14 11:43:33.915875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:07.015 [2024-05-14 11:43:33.915891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.015 [2024-05-14 11:43:33.915942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:07.015 [2024-05-14 11:43:33.915956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.015 #25 NEW cov: 12109 ft: 14787 corp: 17/547b lim: 50 exec/s: 0 rss: 71Mb L: 49/49 MS: 1 PersAutoDict- DE: "\001\205=d\223\3773L"- 00:07:07.015 [2024-05-14 11:43:33.955433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.015 [2024-05-14 11:43:33.955461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.015 #26 NEW cov: 12109 ft: 14816 corp: 18/564b lim: 50 exec/s: 26 rss: 71Mb L: 17/49 MS: 1 InsertByte- 00:07:07.015 [2024-05-14 11:43:33.995535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.015 [2024-05-14 11:43:33.995562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.015 #29 NEW cov: 12109 ft: 14834 corp: 
19/579b lim: 50 exec/s: 29 rss: 71Mb L: 15/49 MS: 3 ShuffleBytes-CrossOver-CrossOver- 00:07:07.015 [2024-05-14 11:43:34.035769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.015 [2024-05-14 11:43:34.035797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.015 [2024-05-14 11:43:34.035838] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:07.015 [2024-05-14 11:43:34.035854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.015 #30 NEW cov: 12109 ft: 15128 corp: 20/599b lim: 50 exec/s: 30 rss: 72Mb L: 20/49 MS: 1 CopyPart- 00:07:07.015 [2024-05-14 11:43:34.086224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.015 [2024-05-14 11:43:34.086253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.015 [2024-05-14 11:43:34.086301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:07.015 [2024-05-14 11:43:34.086320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.015 [2024-05-14 11:43:34.086374] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:07.015 [2024-05-14 11:43:34.086394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.015 [2024-05-14 11:43:34.086450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:07.015 [2024-05-14 11:43:34.086466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.274 #31 NEW cov: 12109 ft: 15138 corp: 21/646b lim: 50 exec/s: 31 rss: 72Mb L: 47/49 MS: 1 ChangeByte- 00:07:07.274 [2024-05-14 11:43:34.126177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.274 [2024-05-14 11:43:34.126205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.274 [2024-05-14 11:43:34.126255] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:07.274 [2024-05-14 11:43:34.126271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.274 [2024-05-14 11:43:34.126323] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:07.274 [2024-05-14 11:43:34.126339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.274 #32 NEW cov: 12109 ft: 15161 corp: 22/677b lim: 50 exec/s: 32 rss: 72Mb L: 31/49 MS: 1 EraseBytes- 00:07:07.274 [2024-05-14 11:43:34.176018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.274 [2024-05-14 11:43:34.176045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.274 #33 NEW cov: 12109 ft: 15169 corp: 23/688b lim: 50 exec/s: 33 rss: 72Mb L: 11/49 MS: 1 ChangeASCIIInt- 00:07:07.274 [2024-05-14 11:43:34.216605] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.274 [2024-05-14 11:43:34.216632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.274 [2024-05-14 11:43:34.216689] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:07.274 [2024-05-14 11:43:34.216705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.274 [2024-05-14 11:43:34.216757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:07.274 [2024-05-14 11:43:34.216772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.274 [2024-05-14 11:43:34.216824] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:07.274 [2024-05-14 11:43:34.216839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.274 #34 NEW cov: 12109 ft: 15188 corp: 24/736b lim: 50 exec/s: 34 rss: 72Mb L: 48/49 MS: 1 CrossOver- 00:07:07.274 [2024-05-14 11:43:34.266729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.274 [2024-05-14 11:43:34.266756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.274 [2024-05-14 11:43:34.266803] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:07.274 [2024-05-14 11:43:34.266819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.274 [2024-05-14 11:43:34.266875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:07.274 [2024-05-14 11:43:34.266891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.274 [2024-05-14 11:43:34.266944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:07.274 [2024-05-14 11:43:34.266959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.274 #35 NEW cov: 12109 ft: 15192 corp: 25/784b lim: 50 exec/s: 35 rss: 72Mb L: 48/49 MS: 1 ChangeASCIIInt- 00:07:07.274 [2024-05-14 11:43:34.316843] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.274 [2024-05-14 11:43:34.316869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.274 [2024-05-14 11:43:34.316916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:07.274 [2024-05-14 11:43:34.316931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.274 [2024-05-14 11:43:34.316982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:07.274 [2024-05-14 11:43:34.316998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.274 [2024-05-14 11:43:34.317050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:07.274 [2024-05-14 11:43:34.317066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.274 #36 NEW cov: 12109 ft: 15221 corp: 26/833b lim: 50 exec/s: 36 rss: 72Mb L: 49/49 MS: 1 InsertByte- 00:07:07.534 [2024-05-14 11:43:34.366604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.534 [2024-05-14 11:43:34.366630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.534 #37 NEW cov: 12109 ft: 15249 corp: 27/844b lim: 50 exec/s: 37 rss: 72Mb L: 11/49 MS: 1 ShuffleBytes- 00:07:07.534 [2024-05-14 11:43:34.406933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.534 [2024-05-14 11:43:34.406960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.534 [2024-05-14 11:43:34.406992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:07.534 [2024-05-14 11:43:34.407008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.534 [2024-05-14 11:43:34.407061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:07.534 [2024-05-14 11:43:34.407076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.534 #38 NEW cov: 12109 ft: 15266 corp: 28/879b lim: 50 exec/s: 38 rss: 72Mb L: 35/49 MS: 1 CrossOver- 00:07:07.534 [2024-05-14 11:43:34.446895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.534 [2024-05-14 11:43:34.446921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.534 [2024-05-14 11:43:34.446954] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:07.534 [2024-05-14 11:43:34.446970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.534 #39 NEW cov: 12109 ft: 15312 corp: 29/902b lim: 50 exec/s: 39 rss: 72Mb L: 23/49 MS: 1 EraseBytes- 00:07:07.534 [2024-05-14 11:43:34.496913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.534 [2024-05-14 11:43:34.496940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.534 #40 NEW cov: 12109 ft: 15316 corp: 30/920b lim: 50 exec/s: 40 rss: 72Mb L: 18/49 MS: 1 CopyPart- 00:07:07.534 [2024-05-14 
11:43:34.537153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.534 [2024-05-14 11:43:34.537179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.534 [2024-05-14 11:43:34.537210] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:07.534 [2024-05-14 11:43:34.537225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.534 #41 NEW cov: 12109 ft: 15323 corp: 31/943b lim: 50 exec/s: 41 rss: 73Mb L: 23/49 MS: 1 CopyPart- 00:07:07.534 [2024-05-14 11:43:34.587579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.534 [2024-05-14 11:43:34.587606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.534 [2024-05-14 11:43:34.587651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:07.534 [2024-05-14 11:43:34.587666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.534 [2024-05-14 11:43:34.587720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:07.534 [2024-05-14 11:43:34.587734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.534 [2024-05-14 11:43:34.587786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:07.534 [2024-05-14 11:43:34.587801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.534 #42 NEW cov: 12109 ft: 15334 corp: 32/991b lim: 50 exec/s: 42 rss: 73Mb L: 48/49 MS: 1 InsertByte- 00:07:07.793 [2024-05-14 11:43:34.627709] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.793 [2024-05-14 11:43:34.627746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.793 [2024-05-14 11:43:34.627809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:07.793 [2024-05-14 11:43:34.627824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.793 [2024-05-14 11:43:34.627876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:07.793 [2024-05-14 11:43:34.627892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.793 [2024-05-14 11:43:34.627945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:07.793 [2024-05-14 11:43:34.627959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.793 #43 NEW cov: 12109 ft: 15350 corp: 33/1038b lim: 50 exec/s: 43 rss: 73Mb L: 47/49 MS: 1 ChangeBinInt- 00:07:07.793 [2024-05-14 
11:43:34.667494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.793 [2024-05-14 11:43:34.667522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.793 [2024-05-14 11:43:34.667554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:07.793 [2024-05-14 11:43:34.667569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.793 #44 NEW cov: 12109 ft: 15370 corp: 34/1061b lim: 50 exec/s: 44 rss: 73Mb L: 23/49 MS: 1 ChangeBinInt- 00:07:07.793 [2024-05-14 11:43:34.707912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.793 [2024-05-14 11:43:34.707939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.793 [2024-05-14 11:43:34.707986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:07.793 [2024-05-14 11:43:34.708001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.793 [2024-05-14 11:43:34.708054] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:07.793 [2024-05-14 11:43:34.708069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.793 [2024-05-14 11:43:34.708121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:07.793 [2024-05-14 11:43:34.708136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.793 #45 NEW cov: 12109 ft: 15375 corp: 35/1108b lim: 50 exec/s: 45 rss: 73Mb L: 47/49 MS: 1 ShuffleBytes- 00:07:07.793 [2024-05-14 11:43:34.757634] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.793 [2024-05-14 11:43:34.757660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.793 #46 NEW cov: 12109 ft: 15379 corp: 36/1119b lim: 50 exec/s: 46 rss: 73Mb L: 11/49 MS: 1 ChangeBinInt- 00:07:07.793 [2024-05-14 11:43:34.798171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.793 [2024-05-14 11:43:34.798198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.793 [2024-05-14 11:43:34.798245] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:07.793 [2024-05-14 11:43:34.798261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.793 [2024-05-14 11:43:34.798311] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:07.793 [2024-05-14 11:43:34.798326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.793 [2024-05-14 
11:43:34.798384] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:07.793 [2024-05-14 11:43:34.798399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:07.793 #47 NEW cov: 12109 ft: 15381 corp: 37/1166b lim: 50 exec/s: 47 rss: 73Mb L: 47/49 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:07:07.793 [2024-05-14 11:43:34.848169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:07.793 [2024-05-14 11:43:34.848195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:07.793 [2024-05-14 11:43:34.848235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:07.793 [2024-05-14 11:43:34.848250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:07.793 [2024-05-14 11:43:34.848301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:07.793 [2024-05-14 11:43:34.848315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:07.793 #48 NEW cov: 12109 ft: 15391 corp: 38/1202b lim: 50 exec/s: 48 rss: 73Mb L: 36/49 MS: 1 EraseBytes- 00:07:08.052 [2024-05-14 11:43:34.898061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:08.052 [2024-05-14 11:43:34.898088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.052 #49 NEW cov: 12109 ft: 15413 corp: 39/1219b lim: 50 exec/s: 49 rss: 73Mb L: 17/49 MS: 1 ChangeBit- 00:07:08.052 [2024-05-14 11:43:34.938609] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:07:08.052 [2024-05-14 11:43:34.938637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.052 [2024-05-14 11:43:34.938678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:07:08.052 [2024-05-14 11:43:34.938694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:08.052 [2024-05-14 11:43:34.938747] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:07:08.052 [2024-05-14 11:43:34.938762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:08.052 [2024-05-14 11:43:34.938815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:07:08.052 [2024-05-14 11:43:34.938830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:08.052 #50 NEW cov: 12109 ft: 15421 corp: 40/1266b lim: 50 exec/s: 25 rss: 73Mb L: 47/49 MS: 1 ShuffleBytes- 00:07:08.052 #50 DONE cov: 12109 ft: 15421 corp: 40/1266b lim: 50 exec/s: 25 rss: 73Mb 00:07:08.052 ###### Recommended dictionary. 
###### 00:07:08.052 "\001\205=d\223\3773L" # Uses: 1 00:07:08.052 "\001\000\000\000\000\000\000\000" # Uses: 0 00:07:08.052 ###### End of recommended dictionary. ###### 00:07:08.052 Done 50 runs in 2 second(s) 00:07:08.052 [2024-05-14 11:43:34.967836] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:08.052 11:43:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:07:08.052 11:43:35 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:08.052 11:43:35 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:08.052 11:43:35 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:07:08.052 11:43:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:07:08.052 11:43:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:08.053 11:43:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:08.053 11:43:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:08.053 11:43:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:07:08.053 11:43:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:08.053 11:43:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:08.053 11:43:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:07:08.053 11:43:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4422 00:07:08.053 11:43:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:08.053 11:43:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:07:08.053 11:43:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:08.053 11:43:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:08.053 11:43:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:08.053 11:43:35 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:07:08.053 [2024-05-14 11:43:35.135661] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:07:08.053 [2024-05-14 11:43:35.135732] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642814 ] 00:07:08.312 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.312 [2024-05-14 11:43:35.311330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.312 [2024-05-14 11:43:35.377176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.570 [2024-05-14 11:43:35.436432] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.570 [2024-05-14 11:43:35.452390] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:08.570 [2024-05-14 11:43:35.452796] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:07:08.570 INFO: Running with entropic power schedule (0xFF, 100). 00:07:08.570 INFO: Seed: 3179804109 00:07:08.570 INFO: Loaded 1 modules (351429 inline 8-bit counters): 351429 [0x290e30c, 0x2963fd1), 00:07:08.570 INFO: Loaded 1 PC tables (351429 PCs): 351429 [0x2963fd8,0x2ec0c28), 00:07:08.570 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:07:08.570 INFO: A corpus is not provided, starting from an empty corpus 00:07:08.570 #2 INITED exec/s: 0 rss: 63Mb 00:07:08.570 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:08.570 This may also happen if the target rejected all inputs we tried so far 00:07:08.570 [2024-05-14 11:43:35.528706] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:08.570 [2024-05-14 11:43:35.528747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.828 NEW_FUNC[1/687]: 0x4a94b0 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:07:08.828 NEW_FUNC[2/687]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:08.828 #5 NEW cov: 11891 ft: 11889 corp: 2/20b lim: 85 exec/s: 0 rss: 70Mb L: 19/19 MS: 3 CrossOver-ChangeByte-InsertRepeatedBytes- 00:07:08.828 [2024-05-14 11:43:35.869307] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:08.828 [2024-05-14 11:43:35.869350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:08.828 #6 NEW cov: 12021 ft: 12674 corp: 3/39b lim: 85 exec/s: 0 rss: 70Mb L: 19/19 MS: 1 CopyPart- 00:07:09.087 [2024-05-14 11:43:35.919463] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.087 [2024-05-14 11:43:35.919488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.087 #7 NEW cov: 12027 ft: 12996 corp: 4/58b lim: 85 exec/s: 0 rss: 70Mb L: 19/19 MS: 1 CrossOver- 00:07:09.087 [2024-05-14 11:43:35.959396] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.087 [2024-05-14 
11:43:35.959431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.087 #13 NEW cov: 12112 ft: 13274 corp: 5/77b lim: 85 exec/s: 0 rss: 70Mb L: 19/19 MS: 1 ChangeByte- 00:07:09.087 [2024-05-14 11:43:35.999906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.087 [2024-05-14 11:43:35.999935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.087 [2024-05-14 11:43:36.000062] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:09.087 [2024-05-14 11:43:36.000089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.087 #14 NEW cov: 12112 ft: 14124 corp: 6/117b lim: 85 exec/s: 0 rss: 70Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:09.087 [2024-05-14 11:43:36.049683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.087 [2024-05-14 11:43:36.049708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.087 #15 NEW cov: 12112 ft: 14187 corp: 7/137b lim: 85 exec/s: 0 rss: 71Mb L: 20/40 MS: 1 InsertByte- 00:07:09.087 [2024-05-14 11:43:36.090331] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.087 [2024-05-14 11:43:36.090363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.087 [2024-05-14 11:43:36.090486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:09.087 [2024-05-14 11:43:36.090510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.087 [2024-05-14 11:43:36.090634] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:09.087 [2024-05-14 11:43:36.090660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.087 #21 NEW cov: 12112 ft: 14608 corp: 8/193b lim: 85 exec/s: 0 rss: 71Mb L: 56/56 MS: 1 InsertRepeatedBytes- 00:07:09.087 [2024-05-14 11:43:36.130029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.087 [2024-05-14 11:43:36.130054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.087 #22 NEW cov: 12112 ft: 14628 corp: 9/212b lim: 85 exec/s: 0 rss: 71Mb L: 19/56 MS: 1 ChangeBit- 00:07:09.344 [2024-05-14 11:43:36.180126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.344 [2024-05-14 11:43:36.180154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.344 #28 NEW cov: 12112 ft: 14665 corp: 10/230b lim: 85 exec/s: 0 rss: 71Mb L: 18/56 MS: 1 EraseBytes- 00:07:09.344 [2024-05-14 11:43:36.220741] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 
00:07:09.344 [2024-05-14 11:43:36.220772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.344 [2024-05-14 11:43:36.220863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:09.344 [2024-05-14 11:43:36.220888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.344 [2024-05-14 11:43:36.221013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:09.344 [2024-05-14 11:43:36.221033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.344 #29 NEW cov: 12112 ft: 14727 corp: 11/295b lim: 85 exec/s: 0 rss: 71Mb L: 65/65 MS: 1 CopyPart- 00:07:09.344 [2024-05-14 11:43:36.270383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.344 [2024-05-14 11:43:36.270410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.344 #30 NEW cov: 12112 ft: 14757 corp: 12/314b lim: 85 exec/s: 0 rss: 71Mb L: 19/65 MS: 1 CMP- DE: "\000\205=e\371|\324`"- 00:07:09.344 [2024-05-14 11:43:36.310829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.344 [2024-05-14 11:43:36.310865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.344 [2024-05-14 11:43:36.310978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:09.344 [2024-05-14 11:43:36.311000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.344 #31 NEW cov: 12112 ft: 14770 corp: 13/354b lim: 85 exec/s: 0 rss: 71Mb L: 40/65 MS: 1 ChangeBinInt- 00:07:09.344 [2024-05-14 11:43:36.351176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.344 [2024-05-14 11:43:36.351207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.344 [2024-05-14 11:43:36.351323] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:09.344 [2024-05-14 11:43:36.351347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.344 [2024-05-14 11:43:36.351480] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:09.344 [2024-05-14 11:43:36.351501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.344 NEW_FUNC[1/1]: 0x19feca0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:09.344 #32 NEW cov: 12135 ft: 14788 corp: 14/410b lim: 85 exec/s: 0 rss: 71Mb L: 56/65 MS: 1 PersAutoDict- DE: "\000\205=e\371|\324`"- 00:07:09.344 [2024-05-14 11:43:36.401538] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.344 [2024-05-14 
11:43:36.401573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.344 [2024-05-14 11:43:36.401675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:09.344 [2024-05-14 11:43:36.401706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.344 [2024-05-14 11:43:36.401819] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:09.344 [2024-05-14 11:43:36.401840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.344 [2024-05-14 11:43:36.401954] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:09.344 [2024-05-14 11:43:36.401982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.344 #33 NEW cov: 12135 ft: 15167 corp: 15/486b lim: 85 exec/s: 0 rss: 71Mb L: 76/76 MS: 1 CopyPart- 00:07:09.602 [2024-05-14 11:43:36.440922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.602 [2024-05-14 11:43:36.440948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.602 #34 NEW cov: 12135 ft: 15195 corp: 16/505b lim: 85 exec/s: 0 rss: 71Mb L: 19/76 MS: 1 ChangeByte- 00:07:09.602 [2024-05-14 11:43:36.481054] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.602 [2024-05-14 11:43:36.481082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.602 #38 NEW cov: 12135 ft: 15206 corp: 17/535b lim: 85 exec/s: 38 rss: 72Mb L: 30/76 MS: 4 EraseBytes-InsertByte-ChangeBinInt-CrossOver- 00:07:09.602 [2024-05-14 11:43:36.531224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.602 [2024-05-14 11:43:36.531253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.602 #39 NEW cov: 12135 ft: 15225 corp: 18/554b lim: 85 exec/s: 39 rss: 72Mb L: 19/76 MS: 1 ChangeBit- 00:07:09.602 [2024-05-14 11:43:36.571234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.602 [2024-05-14 11:43:36.571263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.602 #40 NEW cov: 12135 ft: 15236 corp: 19/572b lim: 85 exec/s: 40 rss: 72Mb L: 18/76 MS: 1 ChangeByte- 00:07:09.602 [2024-05-14 11:43:36.611425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.602 [2024-05-14 11:43:36.611458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.602 #41 NEW cov: 12135 ft: 15244 corp: 20/591b lim: 85 exec/s: 41 rss: 72Mb L: 19/76 MS: 1 ChangeByte- 00:07:09.602 [2024-05-14 11:43:36.651587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) 
sqid:1 cid:0 nsid:0 00:07:09.602 [2024-05-14 11:43:36.651613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.602 #42 NEW cov: 12135 ft: 15281 corp: 21/611b lim: 85 exec/s: 42 rss: 72Mb L: 20/76 MS: 1 InsertByte- 00:07:09.602 [2024-05-14 11:43:36.691684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.602 [2024-05-14 11:43:36.691717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.859 #43 NEW cov: 12135 ft: 15287 corp: 22/631b lim: 85 exec/s: 43 rss: 72Mb L: 20/76 MS: 1 PersAutoDict- DE: "\000\205=e\371|\324`"- 00:07:09.859 [2024-05-14 11:43:36.741946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.859 [2024-05-14 11:43:36.741975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.859 #44 NEW cov: 12135 ft: 15331 corp: 23/650b lim: 85 exec/s: 44 rss: 72Mb L: 19/76 MS: 1 ChangeBit- 00:07:09.859 [2024-05-14 11:43:36.781919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.859 [2024-05-14 11:43:36.781947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.860 #50 NEW cov: 12135 ft: 15348 corp: 24/668b lim: 85 exec/s: 50 rss: 72Mb L: 18/76 MS: 1 ChangeBinInt- 00:07:09.860 [2024-05-14 11:43:36.822874] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.860 [2024-05-14 11:43:36.822909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.860 [2024-05-14 11:43:36.823015] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:09.860 [2024-05-14 11:43:36.823038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.860 [2024-05-14 11:43:36.823151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:09.860 [2024-05-14 11:43:36.823174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.860 [2024-05-14 11:43:36.823293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:09.860 [2024-05-14 11:43:36.823317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.860 #51 NEW cov: 12135 ft: 15356 corp: 25/744b lim: 85 exec/s: 51 rss: 72Mb L: 76/76 MS: 1 ChangeByte- 00:07:09.860 [2024-05-14 11:43:36.872246] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.860 [2024-05-14 11:43:36.872278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.860 #52 NEW cov: 12135 ft: 15359 corp: 26/763b lim: 85 exec/s: 52 rss: 72Mb L: 19/76 MS: 1 ChangeByte- 00:07:09.860 [2024-05-14 11:43:36.913129] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:09.860 [2024-05-14 11:43:36.913161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:09.860 [2024-05-14 11:43:36.913228] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:09.860 [2024-05-14 11:43:36.913249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:09.860 [2024-05-14 11:43:36.913357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:09.860 [2024-05-14 11:43:36.913386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:09.860 [2024-05-14 11:43:36.913496] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:09.860 [2024-05-14 11:43:36.913517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:09.860 #58 NEW cov: 12135 ft: 15374 corp: 27/835b lim: 85 exec/s: 58 rss: 72Mb L: 72/76 MS: 1 InsertRepeatedBytes- 00:07:10.117 [2024-05-14 11:43:36.962535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:10.117 [2024-05-14 11:43:36.962567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.117 #59 NEW cov: 12135 ft: 15378 corp: 28/857b lim: 85 exec/s: 59 rss: 72Mb L: 22/76 MS: 1 EraseBytes- 00:07:10.117 [2024-05-14 11:43:37.003104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:10.117 [2024-05-14 11:43:37.003140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.117 [2024-05-14 11:43:37.003234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:10.117 [2024-05-14 11:43:37.003257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.117 [2024-05-14 11:43:37.003383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:10.117 [2024-05-14 11:43:37.003408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.117 #60 NEW cov: 12135 ft: 15386 corp: 29/921b lim: 85 exec/s: 60 rss: 72Mb L: 64/76 MS: 1 InsertRepeatedBytes- 00:07:10.117 [2024-05-14 11:43:37.043319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:10.117 [2024-05-14 11:43:37.043350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.117 [2024-05-14 11:43:37.043451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:10.117 [2024-05-14 11:43:37.043476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.117 [2024-05-14 
11:43:37.043592] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:10.117 [2024-05-14 11:43:37.043613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.117 #61 NEW cov: 12135 ft: 15421 corp: 30/980b lim: 85 exec/s: 61 rss: 72Mb L: 59/76 MS: 1 InsertRepeatedBytes- 00:07:10.117 [2024-05-14 11:43:37.082900] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:10.117 [2024-05-14 11:43:37.082929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.117 #62 NEW cov: 12135 ft: 15448 corp: 31/999b lim: 85 exec/s: 62 rss: 72Mb L: 19/76 MS: 1 PersAutoDict- DE: "\000\205=e\371|\324`"- 00:07:10.117 [2024-05-14 11:43:37.123742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:10.117 [2024-05-14 11:43:37.123773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.117 [2024-05-14 11:43:37.123859] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:10.117 [2024-05-14 11:43:37.123879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.117 [2024-05-14 11:43:37.124000] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:10.117 [2024-05-14 11:43:37.124024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.117 [2024-05-14 11:43:37.124144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:10.117 [2024-05-14 11:43:37.124170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.117 #63 NEW cov: 12135 ft: 15460 corp: 32/1074b lim: 85 exec/s: 63 rss: 72Mb L: 75/76 MS: 1 CrossOver- 00:07:10.117 [2024-05-14 11:43:37.173129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:10.117 [2024-05-14 11:43:37.173157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.117 #64 NEW cov: 12135 ft: 15465 corp: 33/1092b lim: 85 exec/s: 64 rss: 72Mb L: 18/76 MS: 1 CMP- DE: "\376\003\000\000\000\000\000\000"- 00:07:10.375 [2024-05-14 11:43:37.213957] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:10.375 [2024-05-14 11:43:37.213991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.375 [2024-05-14 11:43:37.214076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:10.375 [2024-05-14 11:43:37.214102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.375 [2024-05-14 11:43:37.214222] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:10.375 
[2024-05-14 11:43:37.214245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.375 [2024-05-14 11:43:37.214364] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:10.375 [2024-05-14 11:43:37.214385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.375 #65 NEW cov: 12135 ft: 15470 corp: 34/1172b lim: 85 exec/s: 65 rss: 73Mb L: 80/80 MS: 1 CopyPart- 00:07:10.375 [2024-05-14 11:43:37.264093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:10.375 [2024-05-14 11:43:37.264126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.375 [2024-05-14 11:43:37.264223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:10.375 [2024-05-14 11:43:37.264250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.375 [2024-05-14 11:43:37.264370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:10.375 [2024-05-14 11:43:37.264396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.375 [2024-05-14 11:43:37.264519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:10.375 [2024-05-14 11:43:37.264542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.375 #66 NEW cov: 12135 ft: 15485 corp: 35/1252b lim: 85 exec/s: 66 rss: 73Mb L: 80/80 MS: 1 CrossOver- 00:07:10.375 [2024-05-14 11:43:37.313409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:10.375 [2024-05-14 11:43:37.313442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.375 #67 NEW cov: 12135 ft: 15487 corp: 36/1272b lim: 85 exec/s: 67 rss: 73Mb L: 20/80 MS: 1 PersAutoDict- DE: "\376\003\000\000\000\000\000\000"- 00:07:10.375 [2024-05-14 11:43:37.353583] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:10.375 [2024-05-14 11:43:37.353610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.375 #68 NEW cov: 12135 ft: 15489 corp: 37/1291b lim: 85 exec/s: 68 rss: 73Mb L: 19/80 MS: 1 CopyPart- 00:07:10.375 [2024-05-14 11:43:37.393717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:10.375 [2024-05-14 11:43:37.393749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.375 #69 NEW cov: 12135 ft: 15521 corp: 38/1310b lim: 85 exec/s: 69 rss: 73Mb L: 19/80 MS: 1 ShuffleBytes- 00:07:10.375 [2024-05-14 11:43:37.433829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:10.375 [2024-05-14 11:43:37.433860] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.375 #70 NEW cov: 12135 ft: 15523 corp: 39/1329b lim: 85 exec/s: 70 rss: 73Mb L: 19/80 MS: 1 ShuffleBytes- 00:07:10.634 [2024-05-14 11:43:37.473852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:10.634 [2024-05-14 11:43:37.473883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.634 #71 NEW cov: 12135 ft: 15596 corp: 40/1348b lim: 85 exec/s: 71 rss: 73Mb L: 19/80 MS: 1 ChangeByte- 00:07:10.634 [2024-05-14 11:43:37.514629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:07:10.634 [2024-05-14 11:43:37.514663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:10.634 [2024-05-14 11:43:37.514741] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:07:10.634 [2024-05-14 11:43:37.514765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:10.634 [2024-05-14 11:43:37.514889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:07:10.634 [2024-05-14 11:43:37.514914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:10.634 [2024-05-14 11:43:37.515046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:07:10.634 [2024-05-14 11:43:37.515070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:10.634 #72 NEW cov: 12135 ft: 15600 corp: 41/1424b lim: 85 exec/s: 36 rss: 73Mb L: 76/80 MS: 1 InsertByte- 00:07:10.634 #72 DONE cov: 12135 ft: 15600 corp: 41/1424b lim: 85 exec/s: 36 rss: 73Mb 00:07:10.634 ###### Recommended dictionary. ###### 00:07:10.634 "\000\205=e\371|\324`" # Uses: 5 00:07:10.634 "\376\003\000\000\000\000\000\000" # Uses: 1 00:07:10.634 ###### End of recommended dictionary. 
###### 00:07:10.634 Done 72 runs in 2 second(s) 00:07:10.634 [2024-05-14 11:43:37.545628] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:10.634 11:43:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:07:10.634 11:43:37 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:10.634 11:43:37 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:10.634 11:43:37 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:07:10.634 11:43:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:07:10.634 11:43:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:10.634 11:43:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:10.634 11:43:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:10.634 11:43:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:07:10.634 11:43:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:10.634 11:43:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:10.634 11:43:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:07:10.634 11:43:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4423 00:07:10.634 11:43:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:10.634 11:43:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:07:10.634 11:43:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:10.634 11:43:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:10.634 11:43:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:10.634 11:43:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:07:10.634 [2024-05-14 11:43:37.709897] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:07:10.634 [2024-05-14 11:43:37.709982] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3643352 ] 00:07:10.892 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.892 [2024-05-14 11:43:37.887533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.892 [2024-05-14 11:43:37.952794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.150 [2024-05-14 11:43:38.011941] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.150 [2024-05-14 11:43:38.027894] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:11.150 [2024-05-14 11:43:38.028267] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:07:11.150 INFO: Running with entropic power schedule (0xFF, 100). 00:07:11.150 INFO: Seed: 1457833601 00:07:11.150 INFO: Loaded 1 modules (351429 inline 8-bit counters): 351429 [0x290e30c, 0x2963fd1), 00:07:11.150 INFO: Loaded 1 PC tables (351429 PCs): 351429 [0x2963fd8,0x2ec0c28), 00:07:11.150 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:07:11.150 INFO: A corpus is not provided, starting from an empty corpus 00:07:11.151 #2 INITED exec/s: 0 rss: 64Mb 00:07:11.151 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:11.151 This may also happen if the target rejected all inputs we tried so far 00:07:11.151 [2024-05-14 11:43:38.076906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.151 [2024-05-14 11:43:38.076936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.151 [2024-05-14 11:43:38.076969] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.151 [2024-05-14 11:43:38.076984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.151 [2024-05-14 11:43:38.077038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.151 [2024-05-14 11:43:38.077055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.409 NEW_FUNC[1/686]: 0x4ac6e0 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:07:11.409 NEW_FUNC[2/686]: 0x4be420 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:11.409 #4 NEW cov: 11823 ft: 11817 corp: 2/18b lim: 25 exec/s: 0 rss: 70Mb L: 17/17 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:11.409 [2024-05-14 11:43:38.387655] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.409 [2024-05-14 11:43:38.387698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.409 [2024-05-14 11:43:38.387760] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.409 [2024-05-14 11:43:38.387780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.409 [2024-05-14 11:43:38.387839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.409 [2024-05-14 11:43:38.387859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.409 #5 NEW cov: 11954 ft: 12445 corp: 3/34b lim: 25 exec/s: 0 rss: 70Mb L: 16/17 MS: 1 EraseBytes- 00:07:11.409 [2024-05-14 11:43:38.437715] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.409 [2024-05-14 11:43:38.437744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.409 [2024-05-14 11:43:38.437778] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.409 [2024-05-14 11:43:38.437791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.409 [2024-05-14 11:43:38.437846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.409 [2024-05-14 11:43:38.437861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.409 #11 NEW cov: 11960 ft: 12729 corp: 4/53b lim: 25 exec/s: 0 rss: 70Mb L: 19/19 MS: 1 CrossOver- 00:07:11.409 [2024-05-14 11:43:38.477893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.409 [2024-05-14 11:43:38.477920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.410 [2024-05-14 11:43:38.477968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.410 [2024-05-14 11:43:38.477981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.410 [2024-05-14 11:43:38.478036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.410 [2024-05-14 11:43:38.478051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.410 [2024-05-14 11:43:38.478109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:11.410 [2024-05-14 11:43:38.478124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.669 #12 NEW cov: 12045 ft: 13454 corp: 5/74b lim: 25 exec/s: 0 rss: 70Mb L: 21/21 MS: 1 InsertRepeatedBytes- 00:07:11.669 [2024-05-14 11:43:38.518018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.669 [2024-05-14 11:43:38.518045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.669 [2024-05-14 11:43:38.518091] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.669 [2024-05-14 11:43:38.518107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.669 [2024-05-14 11:43:38.518157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.669 [2024-05-14 11:43:38.518171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.669 [2024-05-14 11:43:38.518223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:11.669 [2024-05-14 11:43:38.518236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.669 #13 NEW cov: 12045 ft: 13518 corp: 6/95b lim: 25 exec/s: 0 rss: 70Mb L: 21/21 MS: 1 ChangeByte- 00:07:11.669 [2024-05-14 11:43:38.568035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.669 [2024-05-14 11:43:38.568063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.669 [2024-05-14 11:43:38.568097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.669 [2024-05-14 11:43:38.568112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.669 [2024-05-14 11:43:38.568165] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.669 [2024-05-14 11:43:38.568182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.669 #14 NEW cov: 12045 ft: 13643 corp: 7/114b lim: 25 exec/s: 0 rss: 71Mb L: 19/21 MS: 1 ChangeByte- 00:07:11.669 [2024-05-14 11:43:38.608395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.669 [2024-05-14 11:43:38.608424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.669 [2024-05-14 11:43:38.608475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.669 [2024-05-14 11:43:38.608488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.669 [2024-05-14 11:43:38.608542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.669 [2024-05-14 11:43:38.608557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.669 [2024-05-14 11:43:38.608612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:11.669 [2024-05-14 11:43:38.608626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.669 [2024-05-14 11:43:38.608679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:11.669 [2024-05-14 11:43:38.608697] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:11.669 #20 NEW cov: 12045 ft: 13768 corp: 8/139b lim: 25 exec/s: 0 rss: 71Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:07:11.669 [2024-05-14 11:43:38.648337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.669 [2024-05-14 11:43:38.648365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.669 [2024-05-14 11:43:38.648422] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.669 [2024-05-14 11:43:38.648438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.669 [2024-05-14 11:43:38.648491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.669 [2024-05-14 11:43:38.648506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.669 #21 NEW cov: 12045 ft: 13789 corp: 9/157b lim: 25 exec/s: 0 rss: 71Mb L: 18/25 MS: 1 InsertByte- 00:07:11.669 [2024-05-14 11:43:38.688528] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.669 [2024-05-14 11:43:38.688557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.669 [2024-05-14 11:43:38.688601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.669 [2024-05-14 11:43:38.688616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.669 [2024-05-14 11:43:38.688669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.669 [2024-05-14 11:43:38.688684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.669 [2024-05-14 11:43:38.688735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:11.669 [2024-05-14 11:43:38.688749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.669 #22 NEW cov: 12045 ft: 13811 corp: 10/179b lim: 25 exec/s: 0 rss: 71Mb L: 22/25 MS: 1 InsertByte- 00:07:11.669 [2024-05-14 11:43:38.738530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.669 [2024-05-14 11:43:38.738559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.669 [2024-05-14 11:43:38.738598] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.669 [2024-05-14 11:43:38.738613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.669 [2024-05-14 11:43:38.738666] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.669 [2024-05-14 11:43:38.738681] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.928 #23 NEW cov: 12045 ft: 13897 corp: 11/198b lim: 25 exec/s: 0 rss: 71Mb L: 19/25 MS: 1 ChangeByte- 00:07:11.928 [2024-05-14 11:43:38.778551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.928 [2024-05-14 11:43:38.778578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.928 [2024-05-14 11:43:38.778629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.928 [2024-05-14 11:43:38.778646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.928 #24 NEW cov: 12045 ft: 14210 corp: 12/211b lim: 25 exec/s: 0 rss: 71Mb L: 13/25 MS: 1 EraseBytes- 00:07:11.928 [2024-05-14 11:43:38.818991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.928 [2024-05-14 11:43:38.819018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.928 [2024-05-14 11:43:38.819073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.928 [2024-05-14 11:43:38.819087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.928 [2024-05-14 11:43:38.819135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.928 [2024-05-14 11:43:38.819150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.928 [2024-05-14 11:43:38.819199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:11.928 [2024-05-14 11:43:38.819214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:11.928 [2024-05-14 11:43:38.819265] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:11.928 [2024-05-14 11:43:38.819280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:11.928 #25 NEW cov: 12045 ft: 14222 corp: 13/236b lim: 25 exec/s: 0 rss: 71Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:07:11.928 [2024-05-14 11:43:38.868878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.928 [2024-05-14 11:43:38.868905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.928 [2024-05-14 11:43:38.868945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.928 [2024-05-14 11:43:38.868961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.928 [2024-05-14 11:43:38.869014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.928 [2024-05-14 11:43:38.869027] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.928 #26 NEW cov: 12045 ft: 14259 corp: 14/255b lim: 25 exec/s: 0 rss: 71Mb L: 19/25 MS: 1 ChangeByte- 00:07:11.928 [2024-05-14 11:43:38.909007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.928 [2024-05-14 11:43:38.909033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.928 [2024-05-14 11:43:38.909068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.928 [2024-05-14 11:43:38.909084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.928 [2024-05-14 11:43:38.909136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.928 [2024-05-14 11:43:38.909167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.928 #27 NEW cov: 12045 ft: 14262 corp: 15/273b lim: 25 exec/s: 0 rss: 71Mb L: 18/25 MS: 1 CopyPart- 00:07:11.928 [2024-05-14 11:43:38.949099] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.928 [2024-05-14 11:43:38.949125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.928 [2024-05-14 11:43:38.949170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.929 [2024-05-14 11:43:38.949185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.929 [2024-05-14 11:43:38.949244] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.929 [2024-05-14 11:43:38.949260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:11.929 NEW_FUNC[1/1]: 0x19feca0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:11.929 #28 NEW cov: 12068 ft: 14305 corp: 16/291b lim: 25 exec/s: 0 rss: 72Mb L: 18/25 MS: 1 CrossOver- 00:07:11.929 [2024-05-14 11:43:38.999263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:11.929 [2024-05-14 11:43:38.999291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:11.929 [2024-05-14 11:43:38.999349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:11.929 [2024-05-14 11:43:38.999365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:11.929 [2024-05-14 11:43:38.999422] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:11.929 [2024-05-14 11:43:38.999437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.187 #29 NEW cov: 12068 ft: 14332 corp: 17/309b lim: 25 exec/s: 0 rss: 72Mb L: 18/25 
MS: 1 CrossOver- 00:07:12.187 [2024-05-14 11:43:39.039441] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.187 [2024-05-14 11:43:39.039468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.187 [2024-05-14 11:43:39.039526] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.187 [2024-05-14 11:43:39.039542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.187 [2024-05-14 11:43:39.039593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:12.187 [2024-05-14 11:43:39.039608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.187 #30 NEW cov: 12068 ft: 14384 corp: 18/327b lim: 25 exec/s: 30 rss: 72Mb L: 18/25 MS: 1 ChangeBit- 00:07:12.187 [2024-05-14 11:43:39.079503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.187 [2024-05-14 11:43:39.079529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.187 [2024-05-14 11:43:39.079570] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.188 [2024-05-14 11:43:39.079585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.188 [2024-05-14 11:43:39.079640] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:12.188 [2024-05-14 11:43:39.079655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.188 #31 NEW cov: 12068 ft: 14411 corp: 19/343b lim: 25 exec/s: 31 rss: 72Mb L: 16/25 MS: 1 ChangeBit- 00:07:12.188 [2024-05-14 11:43:39.119616] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.188 [2024-05-14 11:43:39.119643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.188 [2024-05-14 11:43:39.119676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.188 [2024-05-14 11:43:39.119690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.188 [2024-05-14 11:43:39.119746] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:12.188 [2024-05-14 11:43:39.119761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.188 #32 NEW cov: 12068 ft: 14417 corp: 20/361b lim: 25 exec/s: 32 rss: 72Mb L: 18/25 MS: 1 ChangeByte- 00:07:12.188 [2024-05-14 11:43:39.159757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.188 [2024-05-14 11:43:39.159785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:12.188 [2024-05-14 11:43:39.159818] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.188 [2024-05-14 11:43:39.159832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.188 [2024-05-14 11:43:39.159887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:12.188 [2024-05-14 11:43:39.159901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.188 #33 NEW cov: 12068 ft: 14437 corp: 21/378b lim: 25 exec/s: 33 rss: 72Mb L: 17/25 MS: 1 ShuffleBytes- 00:07:12.188 [2024-05-14 11:43:39.199697] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.188 [2024-05-14 11:43:39.199724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.188 [2024-05-14 11:43:39.199758] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.188 [2024-05-14 11:43:39.199774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.188 #34 NEW cov: 12068 ft: 14471 corp: 22/391b lim: 25 exec/s: 34 rss: 72Mb L: 13/25 MS: 1 ChangeBinInt- 00:07:12.188 [2024-05-14 11:43:39.240169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.188 [2024-05-14 11:43:39.240196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.188 [2024-05-14 11:43:39.240249] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.188 [2024-05-14 11:43:39.240263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.188 [2024-05-14 11:43:39.240317] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:12.188 [2024-05-14 11:43:39.240334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.188 [2024-05-14 11:43:39.240388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:12.188 [2024-05-14 11:43:39.240403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.188 [2024-05-14 11:43:39.240457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:12.188 [2024-05-14 11:43:39.240472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:12.188 #35 NEW cov: 12068 ft: 14487 corp: 23/416b lim: 25 exec/s: 35 rss: 72Mb L: 25/25 MS: 1 ChangeBit- 00:07:12.446 [2024-05-14 11:43:39.290100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.446 [2024-05-14 11:43:39.290128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.446 
[2024-05-14 11:43:39.290182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.446 [2024-05-14 11:43:39.290200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.447 [2024-05-14 11:43:39.290255] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:12.447 [2024-05-14 11:43:39.290268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.447 #36 NEW cov: 12068 ft: 14498 corp: 24/435b lim: 25 exec/s: 36 rss: 72Mb L: 19/25 MS: 1 ChangeByte- 00:07:12.447 [2024-05-14 11:43:39.330192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.447 [2024-05-14 11:43:39.330220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.447 [2024-05-14 11:43:39.330282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.447 [2024-05-14 11:43:39.330298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.447 [2024-05-14 11:43:39.330349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:12.447 [2024-05-14 11:43:39.330364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.447 #37 NEW cov: 12068 ft: 14512 corp: 25/454b lim: 25 exec/s: 37 rss: 72Mb L: 19/25 MS: 1 ShuffleBytes- 00:07:12.447 [2024-05-14 11:43:39.370173] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.447 [2024-05-14 11:43:39.370200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.447 [2024-05-14 11:43:39.370235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.447 [2024-05-14 11:43:39.370251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.447 #38 NEW cov: 12068 ft: 14549 corp: 26/467b lim: 25 exec/s: 38 rss: 72Mb L: 13/25 MS: 1 ChangeBinInt- 00:07:12.447 [2024-05-14 11:43:39.420203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.447 [2024-05-14 11:43:39.420229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.447 #39 NEW cov: 12068 ft: 14926 corp: 27/475b lim: 25 exec/s: 39 rss: 72Mb L: 8/25 MS: 1 CrossOver- 00:07:12.447 [2024-05-14 11:43:39.460553] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.447 [2024-05-14 11:43:39.460580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.447 [2024-05-14 11:43:39.460620] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.447 [2024-05-14 11:43:39.460637] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.447 [2024-05-14 11:43:39.460692] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:12.447 [2024-05-14 11:43:39.460708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.447 #40 NEW cov: 12068 ft: 14994 corp: 28/494b lim: 25 exec/s: 40 rss: 72Mb L: 19/25 MS: 1 ChangeByte- 00:07:12.447 [2024-05-14 11:43:39.500669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.447 [2024-05-14 11:43:39.500697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.447 [2024-05-14 11:43:39.500731] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.447 [2024-05-14 11:43:39.500750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.447 [2024-05-14 11:43:39.500804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:12.447 [2024-05-14 11:43:39.500818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.447 #41 NEW cov: 12068 ft: 15003 corp: 29/512b lim: 25 exec/s: 41 rss: 72Mb L: 18/25 MS: 1 EraseBytes- 00:07:12.706 [2024-05-14 11:43:39.540584] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.706 [2024-05-14 11:43:39.540611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.706 #42 NEW cov: 12068 ft: 15011 corp: 30/520b lim: 25 exec/s: 42 rss: 72Mb L: 8/25 MS: 1 CrossOver- 00:07:12.706 [2024-05-14 11:43:39.580802] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.706 [2024-05-14 11:43:39.580828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.706 [2024-05-14 11:43:39.580863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.706 [2024-05-14 11:43:39.580878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.706 #43 NEW cov: 12068 ft: 15049 corp: 31/532b lim: 25 exec/s: 43 rss: 72Mb L: 12/25 MS: 1 EraseBytes- 00:07:12.706 [2024-05-14 11:43:39.620896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.706 [2024-05-14 11:43:39.620923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.706 [2024-05-14 11:43:39.620957] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.706 [2024-05-14 11:43:39.620971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.706 #44 NEW cov: 12068 ft: 15054 corp: 32/546b lim: 25 exec/s: 44 rss: 72Mb L: 14/25 MS: 1 CrossOver- 00:07:12.706 [2024-05-14 
11:43:39.661155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.706 [2024-05-14 11:43:39.661182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.706 [2024-05-14 11:43:39.661224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.706 [2024-05-14 11:43:39.661239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.706 [2024-05-14 11:43:39.661295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:12.706 [2024-05-14 11:43:39.661310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.706 #45 NEW cov: 12068 ft: 15060 corp: 33/563b lim: 25 exec/s: 45 rss: 73Mb L: 17/25 MS: 1 InsertByte- 00:07:12.706 [2024-05-14 11:43:39.701256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.706 [2024-05-14 11:43:39.701283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.706 [2024-05-14 11:43:39.701327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.706 [2024-05-14 11:43:39.701343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.706 [2024-05-14 11:43:39.701400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:12.706 [2024-05-14 11:43:39.701431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.706 #46 NEW cov: 12068 ft: 15073 corp: 34/582b lim: 25 exec/s: 46 rss: 73Mb L: 19/25 MS: 1 CopyPart- 00:07:12.706 [2024-05-14 11:43:39.741624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.706 [2024-05-14 11:43:39.741652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.706 [2024-05-14 11:43:39.741702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.706 [2024-05-14 11:43:39.741717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.706 [2024-05-14 11:43:39.741769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:12.706 [2024-05-14 11:43:39.741785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.706 [2024-05-14 11:43:39.741839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:12.706 [2024-05-14 11:43:39.741854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.706 [2024-05-14 11:43:39.741907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:12.706 
[2024-05-14 11:43:39.741921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:12.706 #47 NEW cov: 12068 ft: 15084 corp: 35/607b lim: 25 exec/s: 47 rss: 73Mb L: 25/25 MS: 1 CopyPart- 00:07:12.707 [2024-05-14 11:43:39.791592] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.707 [2024-05-14 11:43:39.791621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.707 [2024-05-14 11:43:39.791655] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.707 [2024-05-14 11:43:39.791670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.707 [2024-05-14 11:43:39.791725] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:12.707 [2024-05-14 11:43:39.791741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.965 #48 NEW cov: 12068 ft: 15087 corp: 36/626b lim: 25 exec/s: 48 rss: 73Mb L: 19/25 MS: 1 ChangeBinInt- 00:07:12.965 [2024-05-14 11:43:39.831550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.965 [2024-05-14 11:43:39.831579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.965 [2024-05-14 11:43:39.831628] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.965 [2024-05-14 11:43:39.831643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.965 #49 NEW cov: 12068 ft: 15092 corp: 37/639b lim: 25 exec/s: 49 rss: 73Mb L: 13/25 MS: 1 ShuffleBytes- 00:07:12.965 [2024-05-14 11:43:39.871611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.965 [2024-05-14 11:43:39.871638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.965 [2024-05-14 11:43:39.871687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.965 [2024-05-14 11:43:39.871701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.965 #50 NEW cov: 12068 ft: 15101 corp: 38/652b lim: 25 exec/s: 50 rss: 73Mb L: 13/25 MS: 1 ChangeByte- 00:07:12.965 [2024-05-14 11:43:39.911670] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.965 [2024-05-14 11:43:39.911697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.965 [2024-05-14 11:43:39.951783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.965 [2024-05-14 11:43:39.951810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.965 #52 NEW cov: 12068 ft: 15132 corp: 39/660b lim: 
25 exec/s: 52 rss: 73Mb L: 8/25 MS: 2 ChangeBinInt-ChangeBit- 00:07:12.965 [2024-05-14 11:43:39.992126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.965 [2024-05-14 11:43:39.992153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.965 [2024-05-14 11:43:39.992186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.965 [2024-05-14 11:43:39.992201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.965 [2024-05-14 11:43:39.992253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:12.965 [2024-05-14 11:43:39.992269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.965 #53 NEW cov: 12068 ft: 15166 corp: 40/678b lim: 25 exec/s: 53 rss: 73Mb L: 18/25 MS: 1 CopyPart- 00:07:12.965 [2024-05-14 11:43:40.032353] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:12.965 [2024-05-14 11:43:40.032385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:12.965 [2024-05-14 11:43:40.032432] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:12.965 [2024-05-14 11:43:40.032448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:12.965 [2024-05-14 11:43:40.032501] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:12.965 [2024-05-14 11:43:40.032516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:12.965 [2024-05-14 11:43:40.032566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:12.965 [2024-05-14 11:43:40.032582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:12.965 #54 NEW cov: 12068 ft: 15205 corp: 41/699b lim: 25 exec/s: 54 rss: 73Mb L: 21/25 MS: 1 CopyPart- 00:07:13.224 [2024-05-14 11:43:40.072618] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:13.224 [2024-05-14 11:43:40.072648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.224 [2024-05-14 11:43:40.072695] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:13.224 [2024-05-14 11:43:40.072711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.224 [2024-05-14 11:43:40.072764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:13.224 [2024-05-14 11:43:40.072780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.224 [2024-05-14 11:43:40.072832] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:13.224 [2024-05-14 11:43:40.072847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:13.224 [2024-05-14 11:43:40.072905] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:13.224 [2024-05-14 11:43:40.072921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:13.224 #55 NEW cov: 12068 ft: 15228 corp: 42/724b lim: 25 exec/s: 27 rss: 73Mb L: 25/25 MS: 1 ChangeBinInt- 00:07:13.224 #55 DONE cov: 12068 ft: 15228 corp: 42/724b lim: 25 exec/s: 27 rss: 73Mb 00:07:13.224 Done 55 runs in 2 second(s) 00:07:13.224 [2024-05-14 11:43:40.094399] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:13.224 11:43:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:07:13.224 11:43:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:13.224 11:43:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:13.224 11:43:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:07:13.224 11:43:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:07:13.224 11:43:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:13.224 11:43:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:13.224 11:43:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:13.224 11:43:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:07:13.224 11:43:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:13.224 11:43:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:13.224 11:43:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:07:13.224 11:43:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4424 00:07:13.224 11:43:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:13.224 11:43:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:07:13.224 11:43:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:13.224 11:43:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:13.224 11:43:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:13.224 11:43:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:07:13.224 [2024-05-14 
11:43:40.263276] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:07:13.224 [2024-05-14 11:43:40.263349] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3643843 ] 00:07:13.224 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.483 [2024-05-14 11:43:40.445770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.483 [2024-05-14 11:43:40.513571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.741 [2024-05-14 11:43:40.573096] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:13.741 [2024-05-14 11:43:40.589046] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:13.741 [2024-05-14 11:43:40.589453] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:07:13.741 INFO: Running with entropic power schedule (0xFF, 100). 00:07:13.741 INFO: Seed: 4020848859 00:07:13.741 INFO: Loaded 1 modules (351429 inline 8-bit counters): 351429 [0x290e30c, 0x2963fd1), 00:07:13.741 INFO: Loaded 1 PC tables (351429 PCs): 351429 [0x2963fd8,0x2ec0c28), 00:07:13.741 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:13.741 INFO: A corpus is not provided, starting from an empty corpus 00:07:13.741 #2 INITED exec/s: 0 rss: 63Mb 00:07:13.741 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:13.741 This may also happen if the target rejected all inputs we tried so far 00:07:13.741 [2024-05-14 11:43:40.634914] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444152907848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.741 [2024-05-14 11:43:40.634943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:13.741 [2024-05-14 11:43:40.634987] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.741 [2024-05-14 11:43:40.635003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:13.741 [2024-05-14 11:43:40.635054] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.741 [2024-05-14 11:43:40.635067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:13.741 [2024-05-14 11:43:40.635122] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:13.741 [2024-05-14 11:43:40.635138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.000 NEW_FUNC[1/687]: 0x4ad7c0 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:07:14.000 NEW_FUNC[2/687]: 0x4be420 in TestOneInput 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:14.000 #5 NEW cov: 11896 ft: 11894 corp: 2/97b lim: 100 exec/s: 0 rss: 70Mb L: 96/96 MS: 3 CopyPart-ChangeByte-InsertRepeatedBytes- 00:07:14.000 [2024-05-14 11:43:40.945768] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444152907848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.000 [2024-05-14 11:43:40.945810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.000 [2024-05-14 11:43:40.945873] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.000 [2024-05-14 11:43:40.945893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.000 [2024-05-14 11:43:40.945955] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5188252634297419848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.000 [2024-05-14 11:43:40.945975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.000 [2024-05-14 11:43:40.946039] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.000 [2024-05-14 11:43:40.946058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.000 #11 NEW cov: 12026 ft: 12466 corp: 3/193b lim: 100 exec/s: 0 rss: 70Mb L: 96/96 MS: 1 ChangeBinInt- 00:07:14.000 [2024-05-14 11:43:40.995733] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444168177736 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.000 [2024-05-14 11:43:40.995764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.000 [2024-05-14 11:43:40.995798] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.000 [2024-05-14 11:43:40.995813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.000 [2024-05-14 11:43:40.995863] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5188252634297419848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.000 [2024-05-14 11:43:40.995878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.000 [2024-05-14 11:43:40.995932] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.000 [2024-05-14 11:43:40.995947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.000 #12 NEW cov: 12032 ft: 12824 corp: 4/289b lim: 100 exec/s: 0 rss: 70Mb L: 96/96 MS: 1 ChangeBinInt- 00:07:14.000 [2024-05-14 11:43:41.035986] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444168177736 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.000 [2024-05-14 11:43:41.036013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.000 [2024-05-14 11:43:41.036068] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.000 [2024-05-14 11:43:41.036082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.000 [2024-05-14 11:43:41.036134] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5208492444341520456 len:24649 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.000 [2024-05-14 11:43:41.036149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.000 [2024-05-14 11:43:41.036200] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.000 [2024-05-14 11:43:41.036214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.000 [2024-05-14 11:43:41.036268] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.000 [2024-05-14 11:43:41.036282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:14.000 #13 NEW cov: 12117 ft: 13074 corp: 5/389b lim: 100 exec/s: 0 rss: 70Mb L: 100/100 MS: 1 InsertRepeatedBytes- 00:07:14.000 [2024-05-14 11:43:41.086172] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444168177736 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.000 [2024-05-14 11:43:41.086200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.001 [2024-05-14 11:43:41.086255] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.001 [2024-05-14 11:43:41.086271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.001 [2024-05-14 11:43:41.086325] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5208492444341520456 len:24649 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.001 [2024-05-14 11:43:41.086341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.001 [2024-05-14 11:43:41.086400] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.001 [2024-05-14 11:43:41.086416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.001 [2024-05-14 11:43:41.086468] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 
lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.001 [2024-05-14 11:43:41.086483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:14.259 #19 NEW cov: 12117 ft: 13182 corp: 6/489b lim: 100 exec/s: 0 rss: 71Mb L: 100/100 MS: 1 ShuffleBytes- 00:07:14.259 [2024-05-14 11:43:41.135796] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444168177736 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.259 [2024-05-14 11:43:41.135823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.259 [2024-05-14 11:43:41.135854] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.259 [2024-05-14 11:43:41.135869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.259 #20 NEW cov: 12117 ft: 13615 corp: 7/544b lim: 100 exec/s: 0 rss: 71Mb L: 55/100 MS: 1 EraseBytes- 00:07:14.259 [2024-05-14 11:43:41.185843] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444168177736 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.259 [2024-05-14 11:43:41.185870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.259 #21 NEW cov: 12117 ft: 14434 corp: 8/582b lim: 100 exec/s: 0 rss: 71Mb L: 38/100 MS: 1 EraseBytes- 00:07:14.259 [2024-05-14 11:43:41.235925] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444152907848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.259 [2024-05-14 11:43:41.235952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.259 #22 NEW cov: 12117 ft: 14459 corp: 9/603b lim: 100 exec/s: 0 rss: 71Mb L: 21/100 MS: 1 CrossOver- 00:07:14.259 [2024-05-14 11:43:41.276535] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444168177736 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.259 [2024-05-14 11:43:41.276563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.259 [2024-05-14 11:43:41.276610] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341061704 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.259 [2024-05-14 11:43:41.276626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.259 [2024-05-14 11:43:41.276678] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5188252634297419848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.259 [2024-05-14 11:43:41.276693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.259 [2024-05-14 11:43:41.276747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.259 [2024-05-14 
11:43:41.276763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.259 #23 NEW cov: 12117 ft: 14541 corp: 10/699b lim: 100 exec/s: 0 rss: 71Mb L: 96/100 MS: 1 ChangeBinInt- 00:07:14.259 [2024-05-14 11:43:41.316597] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444152907848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.259 [2024-05-14 11:43:41.316624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.259 [2024-05-14 11:43:41.316673] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.259 [2024-05-14 11:43:41.316688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.259 [2024-05-14 11:43:41.316742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5188252634297419848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.259 [2024-05-14 11:43:41.316758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.259 [2024-05-14 11:43:41.316811] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.259 [2024-05-14 11:43:41.316825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.259 #24 NEW cov: 12117 ft: 14578 corp: 11/795b lim: 100 exec/s: 0 rss: 71Mb L: 96/100 MS: 1 ShuffleBytes- 00:07:14.519 [2024-05-14 11:43:41.356849] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444152907848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.519 [2024-05-14 11:43:41.356877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.519 [2024-05-14 11:43:41.356927] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.519 [2024-05-14 11:43:41.356943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.519 [2024-05-14 11:43:41.356994] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5188252634297419848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.519 [2024-05-14 11:43:41.357009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.519 [2024-05-14 11:43:41.357061] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.519 [2024-05-14 11:43:41.357076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.519 [2024-05-14 11:43:41.357128] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:5208492444341520456 len:18505 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:07:14.519 [2024-05-14 11:43:41.357143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:14.519 #30 NEW cov: 12117 ft: 14636 corp: 12/895b lim: 100 exec/s: 0 rss: 71Mb L: 100/100 MS: 1 CopyPart- 00:07:14.519 [2024-05-14 11:43:41.396556] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492298139289672 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.519 [2024-05-14 11:43:41.396585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.519 [2024-05-14 11:43:41.396619] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18443 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.519 [2024-05-14 11:43:41.396636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.519 #31 NEW cov: 12117 ft: 14689 corp: 13/951b lim: 100 exec/s: 0 rss: 71Mb L: 56/100 MS: 1 InsertByte- 00:07:14.519 [2024-05-14 11:43:41.436951] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444152907848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.519 [2024-05-14 11:43:41.436980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.519 [2024-05-14 11:43:41.437023] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.519 [2024-05-14 11:43:41.437038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.519 [2024-05-14 11:43:41.437090] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5208492444339816520 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.519 [2024-05-14 11:43:41.437106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.519 [2024-05-14 11:43:41.437161] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.519 [2024-05-14 11:43:41.437176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.519 #37 NEW cov: 12117 ft: 14780 corp: 14/1048b lim: 100 exec/s: 0 rss: 71Mb L: 97/100 MS: 1 InsertByte- 00:07:14.519 [2024-05-14 11:43:41.477026] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444152907848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.519 [2024-05-14 11:43:41.477054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.519 [2024-05-14 11:43:41.477100] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.519 [2024-05-14 11:43:41.477115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:14.519 [2024-05-14 11:43:41.477168] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5188252634297419848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.519 [2024-05-14 11:43:41.477183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.519 [2024-05-14 11:43:41.477234] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.519 [2024-05-14 11:43:41.477248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.519 #38 NEW cov: 12117 ft: 14843 corp: 15/1144b lim: 100 exec/s: 0 rss: 71Mb L: 96/100 MS: 1 ChangeBinInt- 00:07:14.519 [2024-05-14 11:43:41.517165] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444168177736 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.519 [2024-05-14 11:43:41.517194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.519 [2024-05-14 11:43:41.517241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.519 [2024-05-14 11:43:41.517257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.519 [2024-05-14 11:43:41.517326] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.519 [2024-05-14 11:43:41.517343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.519 [2024-05-14 11:43:41.517403] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.519 [2024-05-14 11:43:41.517420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.519 NEW_FUNC[1/1]: 0x19feca0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:14.519 #39 NEW cov: 12140 ft: 14884 corp: 16/1243b lim: 100 exec/s: 0 rss: 71Mb L: 99/100 MS: 1 CopyPart- 00:07:14.519 [2024-05-14 11:43:41.557305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444168177736 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.519 [2024-05-14 11:43:41.557333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.519 [2024-05-14 11:43:41.557385] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341061704 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.519 [2024-05-14 11:43:41.557401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.519 [2024-05-14 11:43:41.557453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5188252634297419848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:14.519 [2024-05-14 11:43:41.557468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.519 [2024-05-14 11:43:41.557521] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.519 [2024-05-14 11:43:41.557537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.519 #40 NEW cov: 12140 ft: 14914 corp: 17/1341b lim: 100 exec/s: 0 rss: 71Mb L: 98/100 MS: 1 CrossOver- 00:07:14.519 [2024-05-14 11:43:41.597124] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444168177736 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.519 [2024-05-14 11:43:41.597152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.519 [2024-05-14 11:43:41.597185] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.519 [2024-05-14 11:43:41.597200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.778 #41 NEW cov: 12140 ft: 14973 corp: 18/1396b lim: 100 exec/s: 0 rss: 71Mb L: 55/100 MS: 1 ChangeBit- 00:07:14.778 [2024-05-14 11:43:41.637527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444152907848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.778 [2024-05-14 11:43:41.637556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.778 [2024-05-14 11:43:41.637594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.778 [2024-05-14 11:43:41.637610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.778 [2024-05-14 11:43:41.637662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.778 [2024-05-14 11:43:41.637676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.778 [2024-05-14 11:43:41.637731] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.778 [2024-05-14 11:43:41.637750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.778 #42 NEW cov: 12140 ft: 14980 corp: 19/1490b lim: 100 exec/s: 42 rss: 72Mb L: 94/100 MS: 1 EraseBytes- 00:07:14.778 [2024-05-14 11:43:41.677628] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:2398246276688988232 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.778 [2024-05-14 11:43:41.677655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.778 [2024-05-14 
11:43:41.677702] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341518664 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.778 [2024-05-14 11:43:41.677717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.778 [2024-05-14 11:43:41.677769] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5208413382583535688 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.778 [2024-05-14 11:43:41.677784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.778 [2024-05-14 11:43:41.677838] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.778 [2024-05-14 11:43:41.677854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.778 #43 NEW cov: 12140 ft: 15004 corp: 20/1589b lim: 100 exec/s: 43 rss: 72Mb L: 99/100 MS: 1 InsertByte- 00:07:14.778 [2024-05-14 11:43:41.717820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444168177736 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.778 [2024-05-14 11:43:41.717850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.778 [2024-05-14 11:43:41.717891] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.778 [2024-05-14 11:43:41.717907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.778 [2024-05-14 11:43:41.717959] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.778 [2024-05-14 11:43:41.717973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.778 [2024-05-14 11:43:41.718026] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492899608053832 len:46921 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.778 [2024-05-14 11:43:41.718043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.778 #44 NEW cov: 12140 ft: 15011 corp: 21/1688b lim: 100 exec/s: 44 rss: 72Mb L: 99/100 MS: 1 ChangeBinInt- 00:07:14.778 [2024-05-14 11:43:41.757597] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444168177736 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.778 [2024-05-14 11:43:41.757624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.778 [2024-05-14 11:43:41.757671] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.778 [2024-05-14 11:43:41.757687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.778 #45 NEW cov: 12140 ft: 15020 corp: 22/1743b lim: 100 exec/s: 45 rss: 72Mb L: 55/100 MS: 1 CopyPart- 00:07:14.778 [2024-05-14 11:43:41.808016] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444152907848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.778 [2024-05-14 11:43:41.808045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.778 [2024-05-14 11:43:41.808091] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.778 [2024-05-14 11:43:41.808106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.778 [2024-05-14 11:43:41.808160] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5208518832620587008 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.778 [2024-05-14 11:43:41.808192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.778 [2024-05-14 11:43:41.808248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.778 [2024-05-14 11:43:41.808262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:14.778 #46 NEW cov: 12140 ft: 15032 corp: 23/1839b lim: 100 exec/s: 46 rss: 72Mb L: 96/100 MS: 1 ShuffleBytes- 00:07:14.778 [2024-05-14 11:43:41.858142] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444152907848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.778 [2024-05-14 11:43:41.858171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:14.778 [2024-05-14 11:43:41.858217] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.778 [2024-05-14 11:43:41.858233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:14.778 [2024-05-14 11:43:41.858285] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5208518832620587008 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.778 [2024-05-14 11:43:41.858301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:14.778 [2024-05-14 11:43:41.858357] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:14.778 [2024-05-14 11:43:41.858373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.037 #47 NEW cov: 12140 ft: 15097 corp: 24/1936b lim: 100 exec/s: 47 rss: 72Mb L: 97/100 MS: 1 InsertByte- 00:07:15.037 [2024-05-14 11:43:41.898418] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444168177736 len:18505 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:15.037 [2024-05-14 11:43:41.898444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.037 [2024-05-14 11:43:41.898502] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.037 [2024-05-14 11:43:41.898517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.037 [2024-05-14 11:43:41.898572] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5208492444341520456 len:24649 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.037 [2024-05-14 11:43:41.898587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.037 [2024-05-14 11:43:41.898641] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.037 [2024-05-14 11:43:41.898658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.037 [2024-05-14 11:43:41.898713] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:5208492444343093320 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.037 [2024-05-14 11:43:41.898729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:15.038 #48 NEW cov: 12140 ft: 15107 corp: 25/2036b lim: 100 exec/s: 48 rss: 72Mb L: 100/100 MS: 1 CopyPart- 00:07:15.038 [2024-05-14 11:43:41.938363] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1302123111085380114 len:4627 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.038 [2024-05-14 11:43:41.938396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.038 [2024-05-14 11:43:41.938446] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1302123111085380114 len:4627 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.038 [2024-05-14 11:43:41.938462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.038 [2024-05-14 11:43:41.938513] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:1302123111085380114 len:4627 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.038 [2024-05-14 11:43:41.938530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.038 [2024-05-14 11:43:41.938582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:1302123111085380114 len:4627 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.038 [2024-05-14 11:43:41.938596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.038 #51 NEW cov: 12140 ft: 15126 corp: 26/2134b lim: 100 exec/s: 51 rss: 72Mb L: 98/100 MS: 3 CopyPart-ChangeByte-InsertRepeatedBytes- 00:07:15.038 [2024-05-14 11:43:41.978210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
COMPARE sqid:1 cid:0 nsid:0 lba:5208492444168177736 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.038 [2024-05-14 11:43:41.978237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.038 [2024-05-14 11:43:41.978269] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.038 [2024-05-14 11:43:41.978285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.038 #52 NEW cov: 12140 ft: 15155 corp: 27/2189b lim: 100 exec/s: 52 rss: 72Mb L: 55/100 MS: 1 ShuffleBytes- 00:07:15.038 [2024-05-14 11:43:42.018622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1302123111085380114 len:4627 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.038 [2024-05-14 11:43:42.018649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.038 [2024-05-14 11:43:42.018698] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1302123111085380114 len:4627 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.038 [2024-05-14 11:43:42.018714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.038 [2024-05-14 11:43:42.018766] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:1302123111085380114 len:4627 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.038 [2024-05-14 11:43:42.018782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.038 [2024-05-14 11:43:42.018835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:1302304530503963154 len:4627 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.038 [2024-05-14 11:43:42.018851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.038 #53 NEW cov: 12140 ft: 15173 corp: 28/2288b lim: 100 exec/s: 53 rss: 72Mb L: 99/100 MS: 1 InsertByte- 00:07:15.038 [2024-05-14 11:43:42.058723] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444152907848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.038 [2024-05-14 11:43:42.058751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.038 [2024-05-14 11:43:42.058793] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.038 [2024-05-14 11:43:42.058808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.038 [2024-05-14 11:43:42.058860] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.038 [2024-05-14 11:43:42.058891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.038 [2024-05-14 11:43:42.058945] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.038 [2024-05-14 11:43:42.058961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.038 #54 NEW cov: 12140 ft: 15189 corp: 29/2382b lim: 100 exec/s: 54 rss: 73Mb L: 94/100 MS: 1 ChangeBinInt- 00:07:15.038 [2024-05-14 11:43:42.108869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444168177736 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.038 [2024-05-14 11:43:42.108896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.038 [2024-05-14 11:43:42.108945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341061704 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.038 [2024-05-14 11:43:42.108958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.038 [2024-05-14 11:43:42.109010] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5188252634297419848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.038 [2024-05-14 11:43:42.109025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.038 [2024-05-14 11:43:42.109078] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.038 [2024-05-14 11:43:42.109094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.297 #55 NEW cov: 12140 ft: 15215 corp: 30/2479b lim: 100 exec/s: 55 rss: 73Mb L: 97/100 MS: 1 CrossOver- 00:07:15.297 [2024-05-14 11:43:42.149151] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444168177736 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.297 [2024-05-14 11:43:42.149178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.297 [2024-05-14 11:43:42.149234] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.297 [2024-05-14 11:43:42.149248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.297 [2024-05-14 11:43:42.149307] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5208492444341520456 len:24649 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.297 [2024-05-14 11:43:42.149322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.297 [2024-05-14 11:43:42.149374] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.297 [2024-05-14 11:43:42.149393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 
m:0 dnr:1 00:07:15.297 [2024-05-14 11:43:42.149444] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:18395032156364603391 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.297 [2024-05-14 11:43:42.149458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:15.297 #61 NEW cov: 12140 ft: 15218 corp: 31/2579b lim: 100 exec/s: 61 rss: 73Mb L: 100/100 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:07:15.297 [2024-05-14 11:43:42.198815] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444168177736 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.297 [2024-05-14 11:43:42.198843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.297 [2024-05-14 11:43:42.198876] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:2571 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.297 [2024-05-14 11:43:42.198890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.297 #62 NEW cov: 12140 ft: 15236 corp: 32/2634b lim: 100 exec/s: 62 rss: 73Mb L: 55/100 MS: 1 ChangeByte- 00:07:15.297 [2024-05-14 11:43:42.248815] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492409793169480 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.297 [2024-05-14 11:43:42.248841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.297 #63 NEW cov: 12140 ft: 15267 corp: 33/2655b lim: 100 exec/s: 63 rss: 73Mb L: 21/100 MS: 1 ChangeBit- 00:07:15.297 [2024-05-14 11:43:42.298941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444152907848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.297 [2024-05-14 11:43:42.298968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.297 #64 NEW cov: 12140 ft: 15289 corp: 34/2676b lim: 100 exec/s: 64 rss: 73Mb L: 21/100 MS: 1 CopyPart- 00:07:15.297 [2024-05-14 11:43:42.349566] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444168177736 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.297 [2024-05-14 11:43:42.349594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.297 [2024-05-14 11:43:42.349641] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341061704 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.297 [2024-05-14 11:43:42.349656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.297 [2024-05-14 11:43:42.349707] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5188252634297419848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.297 [2024-05-14 11:43:42.349722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.297 [2024-05-14 11:43:42.349773] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.297 [2024-05-14 11:43:42.349791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.297 #65 NEW cov: 12140 ft: 15313 corp: 35/2774b lim: 100 exec/s: 65 rss: 73Mb L: 98/100 MS: 1 ChangeBinInt- 00:07:15.555 [2024-05-14 11:43:42.389648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444152907848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.555 [2024-05-14 11:43:42.389676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.555 [2024-05-14 11:43:42.389728] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.555 [2024-05-14 11:43:42.389742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.555 [2024-05-14 11:43:42.389793] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5208518832620587008 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.555 [2024-05-14 11:43:42.389808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.555 [2024-05-14 11:43:42.389859] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.555 [2024-05-14 11:43:42.389874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.555 #66 NEW cov: 12140 ft: 15320 corp: 36/2872b lim: 100 exec/s: 66 rss: 73Mb L: 98/100 MS: 1 InsertByte- 00:07:15.555 [2024-05-14 11:43:42.429752] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444168177736 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.555 [2024-05-14 11:43:42.429779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.555 [2024-05-14 11:43:42.429829] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.555 [2024-05-14 11:43:42.429845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.555 [2024-05-14 11:43:42.429896] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5208633181829875784 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.555 [2024-05-14 11:43:42.429911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.555 [2024-05-14 11:43:42.429964] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492899608053832 len:46921 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.555 [2024-05-14 11:43:42.429979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 
m:0 dnr:1 00:07:15.555 #67 NEW cov: 12140 ft: 15333 corp: 37/2971b lim: 100 exec/s: 67 rss: 74Mb L: 99/100 MS: 1 ChangeBit- 00:07:15.555 [2024-05-14 11:43:42.469408] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208496807839680584 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.555 [2024-05-14 11:43:42.469434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.555 #68 NEW cov: 12140 ft: 15341 corp: 38/2992b lim: 100 exec/s: 68 rss: 74Mb L: 21/100 MS: 1 ChangeBit- 00:07:15.555 [2024-05-14 11:43:42.509641] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492298139289672 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.555 [2024-05-14 11:43:42.509667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.555 [2024-05-14 11:43:42.509730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18529 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.555 [2024-05-14 11:43:42.509747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.555 #69 NEW cov: 12140 ft: 15423 corp: 39/3049b lim: 100 exec/s: 69 rss: 74Mb L: 57/100 MS: 1 InsertByte- 00:07:15.555 [2024-05-14 11:43:42.560046] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:1302123111085380114 len:4627 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.555 [2024-05-14 11:43:42.560073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.555 [2024-05-14 11:43:42.560121] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1302123111085380114 len:4627 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.555 [2024-05-14 11:43:42.560136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.555 [2024-05-14 11:43:42.560189] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:1302123111085380114 len:4627 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.555 [2024-05-14 11:43:42.560204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.555 [2024-05-14 11:43:42.560257] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:1302123111085380114 len:4627 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.555 [2024-05-14 11:43:42.560272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.555 #70 NEW cov: 12140 ft: 15429 corp: 40/3148b lim: 100 exec/s: 70 rss: 74Mb L: 99/100 MS: 1 CopyPart- 00:07:15.555 [2024-05-14 11:43:42.600212] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492444152907848 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.555 [2024-05-14 11:43:42.600240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.555 [2024-05-14 11:43:42.600291] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.555 [2024-05-14 11:43:42.600304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:15.555 [2024-05-14 11:43:42.600357] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5208518832620587008 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.555 [2024-05-14 11:43:42.600392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:15.555 [2024-05-14 11:43:42.600446] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:5208492444341520456 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.556 [2024-05-14 11:43:42.600462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:15.556 #71 NEW cov: 12140 ft: 15432 corp: 41/3246b lim: 100 exec/s: 71 rss: 74Mb L: 98/100 MS: 1 InsertByte- 00:07:15.556 [2024-05-14 11:43:42.639867] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5208492409793169480 len:18505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:15.556 [2024-05-14 11:43:42.639893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:15.815 #72 NEW cov: 12140 ft: 15437 corp: 42/3275b lim: 100 exec/s: 36 rss: 74Mb L: 29/100 MS: 1 CMP- DE: "\377\377~^\\\377\244\211"- 00:07:15.815 #72 DONE cov: 12140 ft: 15437 corp: 42/3275b lim: 100 exec/s: 36 rss: 74Mb 00:07:15.815 ###### Recommended dictionary. ###### 00:07:15.815 "\377\377\377\377\377\377\377\377" # Uses: 0 00:07:15.815 "\377\377~^\\\377\244\211" # Uses: 0 00:07:15.815 ###### End of recommended dictionary. 
###### 00:07:15.815 Done 72 runs in 2 second(s) 00:07:15.815 [2024-05-14 11:43:42.661323] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:15.815 11:43:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:07:15.815 11:43:42 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:15.815 11:43:42 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:15.815 11:43:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:07:15.815 00:07:15.815 real 1m5.310s 00:07:15.815 user 1m40.575s 00:07:15.815 sys 0m8.079s 00:07:15.815 11:43:42 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.815 11:43:42 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:15.815 ************************************ 00:07:15.815 END TEST nvmf_fuzz 00:07:15.815 ************************************ 00:07:15.815 11:43:42 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:15.815 11:43:42 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:15.815 11:43:42 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:15.815 11:43:42 llvm_fuzz -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:15.815 11:43:42 llvm_fuzz -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.815 11:43:42 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:15.815 ************************************ 00:07:15.815 START TEST vfio_fuzz 00:07:15.815 ************************************ 00:07:15.815 11:43:42 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:16.077 * Looking for test storage... 
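For quick triage of a finished pass like the nvmf_fuzz run above, the closing "#N DONE cov: ... ft: ... corp: ..." line and the "Done N runs in T second(s)" line carry the totals. A minimal grep sketch, assuming the console output has been saved to a file; the name nvmf_fuzz.log is only an example, since this run's output went to the console:
# Minimal sketch: pull the final coverage totals and the run summary out of a saved log.
grep -oE '#[0-9]+ DONE cov: [0-9]+ ft: [0-9]+ corp: [0-9]+/[0-9]+b' nvmf_fuzz.log   # e.g. "#72 DONE cov: 12140 ft: 15437 corp: 42/3275b"
grep -oE 'Done [0-9]+ runs in [0-9]+ second' nvmf_fuzz.log                           # e.g. "Done 72 runs in 2 second"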
00:07:16.077 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz 
-- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- 
common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/applications.sh@14 -- # 
VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:16.077 11:43:42 llvm_fuzz.vfio_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:16.078 11:43:42 llvm_fuzz.vfio_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:16.078 11:43:42 llvm_fuzz.vfio_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:16.078 #define SPDK_CONFIG_H 00:07:16.078 #define SPDK_CONFIG_APPS 1 00:07:16.078 #define SPDK_CONFIG_ARCH native 00:07:16.078 #undef SPDK_CONFIG_ASAN 00:07:16.078 #undef SPDK_CONFIG_AVAHI 00:07:16.078 #undef SPDK_CONFIG_CET 00:07:16.078 #define SPDK_CONFIG_COVERAGE 1 00:07:16.078 #define SPDK_CONFIG_CROSS_PREFIX 00:07:16.078 #undef SPDK_CONFIG_CRYPTO 00:07:16.078 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:16.078 #undef SPDK_CONFIG_CUSTOMOCF 00:07:16.078 #undef SPDK_CONFIG_DAOS 00:07:16.078 #define SPDK_CONFIG_DAOS_DIR 00:07:16.078 #define SPDK_CONFIG_DEBUG 1 00:07:16.078 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:16.078 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:16.078 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:16.078 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:16.078 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:16.078 #undef SPDK_CONFIG_DPDK_UADK 00:07:16.078 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:16.078 #define SPDK_CONFIG_EXAMPLES 1 00:07:16.078 #undef SPDK_CONFIG_FC 00:07:16.078 #define SPDK_CONFIG_FC_PATH 00:07:16.078 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:16.078 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:16.078 #undef SPDK_CONFIG_FUSE 00:07:16.078 #define SPDK_CONFIG_FUZZER 1 00:07:16.078 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:16.078 #undef SPDK_CONFIG_GOLANG 00:07:16.078 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:16.078 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:16.078 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:16.078 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:16.078 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:16.078 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:16.078 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:16.078 #define SPDK_CONFIG_IDXD 1 00:07:16.078 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:16.078 #undef SPDK_CONFIG_IPSEC_MB 00:07:16.078 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:16.078 #define SPDK_CONFIG_ISAL 1 00:07:16.078 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:16.078 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:16.078 #define SPDK_CONFIG_LIBDIR 00:07:16.078 #undef SPDK_CONFIG_LTO 00:07:16.078 #define SPDK_CONFIG_MAX_LCORES 00:07:16.078 #define SPDK_CONFIG_NVME_CUSE 1 00:07:16.078 #undef SPDK_CONFIG_OCF 00:07:16.078 #define SPDK_CONFIG_OCF_PATH 00:07:16.078 #define SPDK_CONFIG_OPENSSL_PATH 00:07:16.078 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:16.078 #define SPDK_CONFIG_PGO_DIR 00:07:16.078 #undef SPDK_CONFIG_PGO_USE 00:07:16.078 #define SPDK_CONFIG_PREFIX /usr/local 00:07:16.078 #undef SPDK_CONFIG_RAID5F 00:07:16.078 #undef SPDK_CONFIG_RBD 00:07:16.078 #define SPDK_CONFIG_RDMA 1 
00:07:16.078 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:16.078 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:16.078 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:16.078 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:16.078 #undef SPDK_CONFIG_SHARED 00:07:16.078 #undef SPDK_CONFIG_SMA 00:07:16.078 #define SPDK_CONFIG_TESTS 1 00:07:16.078 #undef SPDK_CONFIG_TSAN 00:07:16.078 #define SPDK_CONFIG_UBLK 1 00:07:16.078 #define SPDK_CONFIG_UBSAN 1 00:07:16.078 #undef SPDK_CONFIG_UNIT_TESTS 00:07:16.078 #undef SPDK_CONFIG_URING 00:07:16.078 #define SPDK_CONFIG_URING_PATH 00:07:16.078 #undef SPDK_CONFIG_URING_ZNS 00:07:16.078 #undef SPDK_CONFIG_USDT 00:07:16.078 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:16.078 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:16.078 #define SPDK_CONFIG_VFIO_USER 1 00:07:16.078 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:16.078 #define SPDK_CONFIG_VHOST 1 00:07:16.078 #define SPDK_CONFIG_VIRTIO 1 00:07:16.078 #undef SPDK_CONFIG_VTUNE 00:07:16.078 #define SPDK_CONFIG_VTUNE_DIR 00:07:16.078 #define SPDK_CONFIG_WERROR 1 00:07:16.078 #define SPDK_CONFIG_WPDK_DIR 00:07:16.078 #undef SPDK_CONFIG_XNVME 00:07:16.078 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:16.078 11:43:42 llvm_fuzz.vfio_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:16.078 11:43:42 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:16.078 11:43:42 llvm_fuzz.vfio_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.078 11:43:42 llvm_fuzz.vfio_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.078 11:43:42 llvm_fuzz.vfio_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.078 11:43:42 llvm_fuzz.vfio_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.078 11:43:42 llvm_fuzz.vfio_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.078 11:43:42 llvm_fuzz.vfio_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.078 11:43:42 llvm_fuzz.vfio_fuzz -- paths/export.sh@5 -- # export PATH 00:07:16.078 11:43:42 llvm_fuzz.vfio_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.078 11:43:42 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:16.078 11:43:42 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:16.078 11:43:42 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- pm/common@68 -- # uname -s 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- pm/common@68 -- # PM_OS=Linux 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@57 -- # : 0 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@61 -- # : 0 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@63 -- # : 0 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@65 -- # : 1 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@67 -- # : 0 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@69 -- # : 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@71 -- # : 0 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@73 -- # : 0 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@75 -- # : 0 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@77 -- # : 0 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@79 -- # : 0 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@81 -- # : 0 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@83 -- # : 0 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:16.078 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@85 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@87 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@89 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@91 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@93 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@95 -- # : 0 00:07:16.079 
11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@97 -- # : 1 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@99 -- # : 1 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@101 -- # : rdma 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@103 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@105 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@107 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@109 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@111 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@113 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@115 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@117 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@119 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@121 -- # : 1 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@123 -- # : 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@125 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@127 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@129 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@131 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- 
common/autotest_common.sh@133 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@135 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@137 -- # : 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@139 -- # : true 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@141 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@143 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@145 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@147 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@149 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@151 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@153 -- # : 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@155 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@157 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@159 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@161 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@163 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@166 -- # : 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@168 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@170 -- # : 0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 
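The long run of paired trace lines above ("# : 0" immediately followed by "# export SPDK_TEST_...") is consistent with bash's ':' no-op evaluating a ${VAR:=default} expansion just before the variable is exported. A minimal sketch of that idiom, using one variable name taken from the trace; the :=0 default is an inference from the ": 0" lines, not quoted from the script:
# Sketch of the default-then-export idiom the xtrace output above reflects:
# ':' is a no-op, so its only visible effect is the ${VAR:=0} default assignment.
: "${SPDK_TEST_NVME:=0}"
export SPDK_TEST_NVME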
00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@184 -- # export 
PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:16.079 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@199 -- # cat 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@249 -- # export 
SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@262 -- # export valgrind= 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@262 -- # valgrind= 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@268 -- # uname -s 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@278 -- # MAKE=make 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j112 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@317 -- # [[ -z 3644218 ]] 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@317 -- # kill -0 3644218 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:16.080 
11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.6Euton 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.6Euton/tests/vfio /tmp/spdk.6Euton 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@326 -- # df -T 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=968232960 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4316196864 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=52945694720 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=61742305280 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=8796610560 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- 
common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=30866440192 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871150592 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=12342489088 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=12348461056 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=5971968 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=30869934080 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871154688 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=1220608 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # avails["$mount"]=6174224384 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # sizes["$mount"]=6174228480 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:16.080 * Looking for test storage... 
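The storage probe traced above builds per-mount size tables from "df -T"; a condensed sketch of that parsing follows. The column order matches the "read -r source fs size use avail _ mount" line shown in the trace, and the *1024 scaling to bytes is inferred from the byte-sized values printed (e.g. 52945694720), not shown verbatim in the script:
# Condensed sketch of the set_test_storage parsing visible in the trace above.
declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
  mounts["$mount"]=$source
  fss["$mount"]=$fs
  sizes["$mount"]=$((size * 1024))
  uses["$mount"]=$((use * 1024))
  avails["$mount"]=$((avail * 1024))
done < <(df -T | grep -v Filesystem)
echo "bytes available under /: ${avails[/]}"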
00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@371 -- # mount=/ 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@373 -- # target_space=52945694720 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@380 -- # new_size=11011203072 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:16.080 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:16.080 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@388 -- # return 0 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1683 -- # true 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- ../common.sh@8 -- # pids=() 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- ../common.sh@70 -- # local time=1 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:07:16.081 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo 
leak:nvmf_ctrlr_create 00:07:16.081 11:43:43 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:07:16.081 [2024-05-14 11:43:43.124898] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:07:16.081 [2024-05-14 11:43:43.124965] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3644328 ] 00:07:16.081 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.339 [2024-05-14 11:43:43.199186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.339 [2024-05-14 11:43:43.271506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.597 [2024-05-14 11:43:43.436066] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:16.597 INFO: Running with entropic power schedule (0xFF, 100). 00:07:16.597 INFO: Seed: 2570887927 00:07:16.597 INFO: Loaded 1 modules (348665 inline 8-bit counters): 348665 [0x28cfb4c, 0x2924d45), 00:07:16.597 INFO: Loaded 1 PC tables (348665 PCs): 348665 [0x2924d48,0x2e76cd8), 00:07:16.597 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:16.597 INFO: A corpus is not provided, starting from an empty corpus 00:07:16.597 #2 INITED exec/s: 0 rss: 64Mb 00:07:16.597 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:16.597 This may also happen if the target rejected all inputs we tried so far 00:07:16.597 [2024-05-14 11:43:43.505582] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:07:17.114 NEW_FUNC[1/646]: 0x481740 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:07:17.114 NEW_FUNC[2/646]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:17.114 #3 NEW cov: 10916 ft: 10887 corp: 2/7b lim: 6 exec/s: 0 rss: 71Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:07:17.114 #13 NEW cov: 10930 ft: 13805 corp: 3/13b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 5 CrossOver-InsertByte-ChangeBit-ChangeBit-CopyPart- 00:07:17.372 NEW_FUNC[1/1]: 0x19cb1d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:17.372 #14 NEW cov: 10947 ft: 14676 corp: 4/19b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 ShuffleBytes- 00:07:17.631 #15 NEW cov: 10947 ft: 15754 corp: 5/25b lim: 6 exec/s: 15 rss: 73Mb L: 6/6 MS: 1 CopyPart- 00:07:17.889 #16 NEW cov: 10947 ft: 17229 corp: 6/31b lim: 6 exec/s: 16 rss: 73Mb L: 6/6 MS: 1 ChangeBit- 00:07:18.146 #17 NEW cov: 10947 ft: 17443 corp: 7/37b lim: 6 exec/s: 17 rss: 73Mb L: 6/6 MS: 1 ChangeBit- 00:07:18.404 #28 NEW cov: 10947 ft: 17477 corp: 8/43b lim: 6 exec/s: 28 rss: 74Mb L: 6/6 MS: 1 ShuffleBytes- 00:07:18.404 #41 NEW cov: 10954 ft: 17654 corp: 9/49b lim: 6 exec/s: 20 rss: 74Mb L: 6/6 MS: 3 CopyPart-ChangeByte-CMP- DE: "\001\000\000\000"- 00:07:18.404 #41 DONE cov: 10954 ft: 17654 corp: 9/49b lim: 6 exec/s: 20 rss: 74Mb 00:07:18.404 ###### Recommended dictionary. ###### 00:07:18.404 "\001\000\000\000" # Uses: 0 00:07:18.404 ###### End of recommended dictionary. 
###### 00:07:18.404 Done 41 runs in 2 second(s) 00:07:18.404 [2024-05-14 11:43:45.489577] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:07:18.662 [2024-05-14 11:43:45.542410] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:18.662 11:43:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:07:18.662 11:43:45 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:18.662 11:43:45 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:18.662 11:43:45 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:18.662 11:43:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:07:18.662 11:43:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:18.662 11:43:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:18.662 11:43:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:18.662 11:43:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:07:18.662 11:43:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:07:18.662 11:43:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:07:18.662 11:43:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:07:18.662 11:43:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:18.662 11:43:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:18.662 11:43:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:18.662 11:43:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:07:18.662 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:18.662 11:43:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:18.662 11:43:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:18.662 11:43:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:07:18.922 [2024-05-14 11:43:45.773211] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:07:18.922 [2024-05-14 11:43:45.773281] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3644783 ] 00:07:18.922 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.922 [2024-05-14 11:43:45.844141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.922 [2024-05-14 11:43:45.914762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.181 [2024-05-14 11:43:46.078730] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:19.181 INFO: Running with entropic power schedule (0xFF, 100). 00:07:19.181 INFO: Seed: 918921067 00:07:19.181 INFO: Loaded 1 modules (348665 inline 8-bit counters): 348665 [0x28cfb4c, 0x2924d45), 00:07:19.181 INFO: Loaded 1 PC tables (348665 PCs): 348665 [0x2924d48,0x2e76cd8), 00:07:19.181 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:19.181 INFO: A corpus is not provided, starting from an empty corpus 00:07:19.181 #2 INITED exec/s: 0 rss: 64Mb 00:07:19.181 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:19.181 This may also happen if the target rejected all inputs we tried so far 00:07:19.181 [2024-05-14 11:43:46.152649] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:07:19.181 [2024-05-14 11:43:46.211416] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:19.181 [2024-05-14 11:43:46.211441] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:19.181 [2024-05-14 11:43:46.211484] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:19.698 NEW_FUNC[1/648]: 0x481ce0 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:07:19.698 NEW_FUNC[2/648]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:19.698 #12 NEW cov: 10916 ft: 10461 corp: 2/5b lim: 4 exec/s: 0 rss: 71Mb L: 4/4 MS: 5 ChangeBinInt-ShuffleBytes-InsertByte-ShuffleBytes-CopyPart- 00:07:19.698 [2024-05-14 11:43:46.709280] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:19.698 [2024-05-14 11:43:46.709314] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:19.698 [2024-05-14 11:43:46.709333] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:19.956 #13 NEW cov: 10930 ft: 13902 corp: 3/9b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 ChangeByte- 00:07:19.956 [2024-05-14 11:43:46.917593] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:19.956 [2024-05-14 11:43:46.917616] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:19.956 [2024-05-14 11:43:46.917633] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:20.214 NEW_FUNC[1/1]: 0x19cb1d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:20.214 #28 NEW cov: 10947 ft: 14974 corp: 4/13b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 5 
ChangeBinInt-ChangeBit-CopyPart-ShuffleBytes-CopyPart- 00:07:20.214 [2024-05-14 11:43:47.137828] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:20.214 [2024-05-14 11:43:47.137850] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:20.214 [2024-05-14 11:43:47.137867] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:20.214 #33 NEW cov: 10947 ft: 16114 corp: 5/17b lim: 4 exec/s: 33 rss: 74Mb L: 4/4 MS: 5 CrossOver-CMP-ChangeByte-ChangeBinInt-InsertByte- DE: "\015\000"- 00:07:20.471 [2024-05-14 11:43:47.346495] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:20.471 [2024-05-14 11:43:47.346517] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:20.471 [2024-05-14 11:43:47.346535] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:20.471 #39 NEW cov: 10947 ft: 16463 corp: 6/21b lim: 4 exec/s: 39 rss: 74Mb L: 4/4 MS: 1 ChangeByte- 00:07:20.471 [2024-05-14 11:43:47.553513] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:20.471 [2024-05-14 11:43:47.553537] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:20.471 [2024-05-14 11:43:47.553554] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:20.728 #40 NEW cov: 10947 ft: 16584 corp: 7/25b lim: 4 exec/s: 40 rss: 74Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:20.728 [2024-05-14 11:43:47.760389] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:20.728 [2024-05-14 11:43:47.760413] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:20.728 [2024-05-14 11:43:47.760431] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:20.986 #41 NEW cov: 10947 ft: 16774 corp: 8/29b lim: 4 exec/s: 41 rss: 74Mb L: 4/4 MS: 1 ChangeByte- 00:07:20.986 [2024-05-14 11:43:47.967630] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:20.986 [2024-05-14 11:43:47.967652] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:20.986 [2024-05-14 11:43:47.967670] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:21.244 #42 NEW cov: 10954 ft: 17195 corp: 9/33b lim: 4 exec/s: 42 rss: 74Mb L: 4/4 MS: 1 CopyPart- 00:07:21.244 [2024-05-14 11:43:48.177958] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:21.244 [2024-05-14 11:43:48.177980] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:21.244 [2024-05-14 11:43:48.177997] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:21.244 #53 NEW cov: 10954 ft: 17249 corp: 10/37b lim: 4 exec/s: 26 rss: 74Mb L: 4/4 MS: 1 ChangeBit- 00:07:21.244 #53 DONE cov: 10954 ft: 17249 corp: 10/37b lim: 4 exec/s: 26 rss: 74Mb 00:07:21.244 ###### Recommended dictionary. ###### 00:07:21.244 "\015\000" # Uses: 1 00:07:21.244 ###### End of recommended dictionary. 
###### 00:07:21.244 Done 53 runs in 2 second(s) 00:07:21.244 [2024-05-14 11:43:48.319581] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:07:21.502 [2024-05-14 11:43:48.373028] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:21.502 11:43:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:07:21.502 11:43:48 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:21.502 11:43:48 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:21.502 11:43:48 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:21.502 11:43:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:07:21.502 11:43:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:21.502 11:43:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:21.502 11:43:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:21.502 11:43:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:07:21.502 11:43:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:07:21.502 11:43:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:07:21.502 11:43:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:07:21.502 11:43:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:21.502 11:43:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:21.502 11:43:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:21.502 11:43:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:07:21.502 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:21.502 11:43:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:21.502 11:43:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:21.502 11:43:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:07:21.761 [2024-05-14 11:43:48.605975] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 
00:07:21.761 [2024-05-14 11:43:48.606052] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645323 ] 00:07:21.761 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.761 [2024-05-14 11:43:48.677323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.761 [2024-05-14 11:43:48.747624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.021 [2024-05-14 11:43:48.915394] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:22.021 INFO: Running with entropic power schedule (0xFF, 100). 00:07:22.021 INFO: Seed: 3757920615 00:07:22.021 INFO: Loaded 1 modules (348665 inline 8-bit counters): 348665 [0x28cfb4c, 0x2924d45), 00:07:22.021 INFO: Loaded 1 PC tables (348665 PCs): 348665 [0x2924d48,0x2e76cd8), 00:07:22.021 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:22.021 INFO: A corpus is not provided, starting from an empty corpus 00:07:22.021 #2 INITED exec/s: 0 rss: 64Mb 00:07:22.021 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:22.021 This may also happen if the target rejected all inputs we tried so far 00:07:22.021 [2024-05-14 11:43:48.989529] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:07:22.021 [2024-05-14 11:43:49.032623] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:22.540 NEW_FUNC[1/645]: 0x4826c0 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:07:22.540 NEW_FUNC[2/645]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:22.540 #11 NEW cov: 10887 ft: 10854 corp: 2/9b lim: 8 exec/s: 0 rss: 71Mb L: 8/8 MS: 4 ShuffleBytes-ChangeBit-InsertRepeatedBytes-InsertByte- 00:07:22.540 [2024-05-14 11:43:49.535474] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:22.797 #12 NEW cov: 10901 ft: 14489 corp: 3/17b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 ChangeByte- 00:07:22.797 [2024-05-14 11:43:49.744246] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:22.797 NEW_FUNC[1/1]: 0x19cb1d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:22.797 #13 NEW cov: 10921 ft: 15600 corp: 4/25b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 ChangeBinInt- 00:07:23.142 [2024-05-14 11:43:49.948460] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:23.142 #16 NEW cov: 10921 ft: 16005 corp: 5/33b lim: 8 exec/s: 16 rss: 74Mb L: 8/8 MS: 3 EraseBytes-CrossOver-InsertByte- 00:07:23.142 [2024-05-14 11:43:50.165018] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:23.420 #17 NEW cov: 10921 ft: 16601 corp: 6/41b lim: 8 exec/s: 17 rss: 74Mb L: 8/8 MS: 1 CrossOver- 00:07:23.420 [2024-05-14 11:43:50.373516] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:23.420 #18 NEW cov: 10921 ft: 16901 corp: 7/49b lim: 8 exec/s: 18 rss: 74Mb L: 8/8 
MS: 1 CrossOver- 00:07:23.678 [2024-05-14 11:43:50.583761] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:23.678 #19 NEW cov: 10921 ft: 17013 corp: 8/57b lim: 8 exec/s: 19 rss: 74Mb L: 8/8 MS: 1 CrossOver- 00:07:23.936 [2024-05-14 11:43:50.789602] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:23.936 #20 NEW cov: 10928 ft: 17138 corp: 9/65b lim: 8 exec/s: 20 rss: 74Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:23.936 [2024-05-14 11:43:50.999942] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:24.194 #21 NEW cov: 10928 ft: 17440 corp: 10/73b lim: 8 exec/s: 10 rss: 74Mb L: 8/8 MS: 1 ChangeBit- 00:07:24.194 #21 DONE cov: 10928 ft: 17440 corp: 10/73b lim: 8 exec/s: 10 rss: 74Mb 00:07:24.194 Done 21 runs in 2 second(s) 00:07:24.194 [2024-05-14 11:43:51.142586] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:07:24.194 [2024-05-14 11:43:51.190863] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:24.453 11:43:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:07:24.453 11:43:51 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:24.453 11:43:51 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:24.453 11:43:51 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:24.453 11:43:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:07:24.453 11:43:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:24.453 11:43:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:24.453 11:43:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:24.453 11:43:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:07:24.453 11:43:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:07:24.453 11:43:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:07:24.453 11:43:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:07:24.453 11:43:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:24.453 11:43:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:24.453 11:43:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:24.453 11:43:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:07:24.453 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:24.453 11:43:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:24.453 11:43:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:24.453 11:43:51 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:07:24.453 [2024-05-14 11:43:51.418085] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:07:24.453 [2024-05-14 11:43:51.418170] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645861 ] 00:07:24.453 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.453 [2024-05-14 11:43:51.490492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.712 [2024-05-14 11:43:51.562433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.712 [2024-05-14 11:43:51.728973] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:24.712 INFO: Running with entropic power schedule (0xFF, 100). 00:07:24.712 INFO: Seed: 2273957786 00:07:24.712 INFO: Loaded 1 modules (348665 inline 8-bit counters): 348665 [0x28cfb4c, 0x2924d45), 00:07:24.712 INFO: Loaded 1 PC tables (348665 PCs): 348665 [0x2924d48,0x2e76cd8), 00:07:24.712 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:24.712 INFO: A corpus is not provided, starting from an empty corpus 00:07:24.712 #2 INITED exec/s: 0 rss: 64Mb 00:07:24.712 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:24.712 This may also happen if the target rejected all inputs we tried so far 00:07:24.969 [2024-05-14 11:43:51.808038] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:07:25.228 NEW_FUNC[1/647]: 0x482da0 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:07:25.228 NEW_FUNC[2/647]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:25.228 #137 NEW cov: 10900 ft: 10875 corp: 2/33b lim: 32 exec/s: 0 rss: 70Mb L: 32/32 MS: 5 ChangeByte-InsertRepeatedBytes-ChangeBit-CrossOver-InsertRepeatedBytes- 00:07:25.488 #138 NEW cov: 10914 ft: 13631 corp: 3/65b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:25.746 NEW_FUNC[1/1]: 0x19cb1d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:25.746 #149 NEW cov: 10931 ft: 14956 corp: 4/97b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 CopyPart- 00:07:26.005 #150 NEW cov: 10931 ft: 15538 corp: 5/129b lim: 32 exec/s: 150 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:26.264 #151 NEW cov: 10934 ft: 15720 corp: 6/161b lim: 32 exec/s: 151 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:26.264 #157 NEW cov: 10934 ft: 15761 corp: 7/193b lim: 32 exec/s: 157 rss: 73Mb L: 32/32 MS: 1 CopyPart- 00:07:26.522 #158 NEW cov: 10934 ft: 15805 corp: 8/225b lim: 32 exec/s: 158 rss: 73Mb L: 32/32 MS: 1 ChangeByte- 00:07:26.780 #159 NEW cov: 10941 ft: 16229 corp: 9/257b lim: 32 exec/s: 159 rss: 73Mb L: 32/32 MS: 1 ChangeBit- 00:07:27.037 #160 NEW cov: 10941 ft: 16238 corp: 10/289b lim: 32 exec/s: 80 rss: 73Mb L: 32/32 MS: 1 ChangeByte- 00:07:27.037 #160 DONE cov: 10941 ft: 16238 corp: 10/289b lim: 32 exec/s: 80 rss: 73Mb 00:07:27.037 Done 160 runs in 2 second(s) 00:07:27.037 [2024-05-14 11:43:53.971569] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:07:27.037 [2024-05-14 11:43:54.024922] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:27.296 11:43:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:07:27.296 11:43:54 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:27.296 11:43:54 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:27.296 11:43:54 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:27.296 11:43:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:07:27.296 11:43:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:27.296 11:43:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:27.296 11:43:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:27.296 11:43:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:07:27.296 11:43:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:07:27.296 11:43:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:07:27.296 11:43:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:07:27.296 11:43:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 
00:07:27.296 11:43:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:27.296 11:43:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:27.296 11:43:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:07:27.296 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:27.296 11:43:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:27.296 11:43:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:27.296 11:43:54 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:07:27.296 [2024-05-14 11:43:54.255766] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:07:27.296 [2024-05-14 11:43:54.255842] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3646398 ] 00:07:27.296 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.296 [2024-05-14 11:43:54.327237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.554 [2024-05-14 11:43:54.400202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.554 [2024-05-14 11:43:54.565422] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:27.554 INFO: Running with entropic power schedule (0xFF, 100). 00:07:27.554 INFO: Seed: 815975466 00:07:27.554 INFO: Loaded 1 modules (348665 inline 8-bit counters): 348665 [0x28cfb4c, 0x2924d45), 00:07:27.554 INFO: Loaded 1 PC tables (348665 PCs): 348665 [0x2924d48,0x2e76cd8), 00:07:27.554 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:27.554 INFO: A corpus is not provided, starting from an empty corpus 00:07:27.554 #2 INITED exec/s: 0 rss: 64Mb 00:07:27.554 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:27.554 This may also happen if the target rejected all inputs we tried so far 00:07:27.554 [2024-05-14 11:43:54.634828] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:07:28.071 NEW_FUNC[1/647]: 0x483620 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:07:28.071 NEW_FUNC[2/647]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:28.071 #152 NEW cov: 10906 ft: 10789 corp: 2/33b lim: 32 exec/s: 0 rss: 71Mb L: 32/32 MS: 5 ChangeBit-CrossOver-ChangeBit-InsertRepeatedBytes-InsertByte- 00:07:28.329 #153 NEW cov: 10923 ft: 13920 corp: 3/65b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 CopyPart- 00:07:28.588 NEW_FUNC[1/1]: 0x19cb1d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:28.588 #154 NEW cov: 10940 ft: 14980 corp: 4/97b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ChangeByte- 00:07:28.846 #160 NEW cov: 10940 ft: 15235 corp: 5/129b lim: 32 exec/s: 160 rss: 73Mb L: 32/32 MS: 1 ChangeBit- 00:07:28.846 #161 NEW cov: 10940 ft: 15534 corp: 6/161b lim: 32 exec/s: 161 rss: 73Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:29.104 #162 NEW cov: 10940 ft: 15688 corp: 7/193b lim: 32 exec/s: 162 rss: 73Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:29.362 #163 NEW cov: 10940 ft: 15712 corp: 8/225b lim: 32 exec/s: 163 rss: 73Mb L: 32/32 MS: 1 CopyPart- 00:07:29.621 #169 NEW cov: 10947 ft: 15869 corp: 9/257b lim: 32 exec/s: 169 rss: 73Mb L: 32/32 MS: 1 ChangeBit- 00:07:29.621 #175 NEW cov: 10947 ft: 15877 corp: 10/289b lim: 32 exec/s: 87 rss: 73Mb L: 32/32 MS: 1 CopyPart- 00:07:29.621 #175 DONE cov: 10947 ft: 15877 corp: 10/289b lim: 32 exec/s: 87 rss: 73Mb 00:07:29.621 Done 175 runs in 2 second(s) 00:07:29.621 [2024-05-14 11:43:56.671564] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:07:29.880 [2024-05-14 11:43:56.721232] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:29.880 11:43:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:07:29.881 11:43:56 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:29.881 11:43:56 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:29.881 11:43:56 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:29.881 11:43:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:07:29.881 11:43:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:29.881 11:43:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:29.881 11:43:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:29.881 11:43:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:07:29.881 11:43:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:07:29.881 11:43:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:07:29.881 11:43:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:07:29.881 11:43:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:29.881 
11:43:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:29.881 11:43:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:29.881 11:43:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:07:29.881 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:29.881 11:43:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:29.881 11:43:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:29.881 11:43:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:07:29.881 [2024-05-14 11:43:56.949900] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:07:29.881 [2024-05-14 11:43:56.949971] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3646761 ] 00:07:30.140 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.140 [2024-05-14 11:43:57.021463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.140 [2024-05-14 11:43:57.092956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.399 [2024-05-14 11:43:57.264339] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:30.399 INFO: Running with entropic power schedule (0xFF, 100). 00:07:30.399 INFO: Seed: 3516990537 00:07:30.399 INFO: Loaded 1 modules (348665 inline 8-bit counters): 348665 [0x28cfb4c, 0x2924d45), 00:07:30.399 INFO: Loaded 1 PC tables (348665 PCs): 348665 [0x2924d48,0x2e76cd8), 00:07:30.399 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:30.399 INFO: A corpus is not provided, starting from an empty corpus 00:07:30.399 #2 INITED exec/s: 0 rss: 65Mb 00:07:30.399 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:30.399 This may also happen if the target rejected all inputs we tried so far 00:07:30.399 [2024-05-14 11:43:57.342126] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:07:30.399 [2024-05-14 11:43:57.401414] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:30.399 [2024-05-14 11:43:57.401449] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:30.918 NEW_FUNC[1/648]: 0x484020 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:07:30.918 NEW_FUNC[2/648]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:30.918 #33 NEW cov: 10914 ft: 10452 corp: 2/14b lim: 13 exec/s: 0 rss: 71Mb L: 13/13 MS: 1 InsertRepeatedBytes- 00:07:30.918 [2024-05-14 11:43:57.886704] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:30.918 [2024-05-14 11:43:57.886748] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:31.177 #44 NEW cov: 10928 ft: 13922 corp: 3/27b lim: 13 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 ShuffleBytes- 00:07:31.177 [2024-05-14 11:43:58.094626] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:31.177 [2024-05-14 11:43:58.094656] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:31.177 NEW_FUNC[1/1]: 0x19cb1d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:31.177 #45 NEW cov: 10945 ft: 15148 corp: 4/40b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 1 ChangeBit- 00:07:31.436 [2024-05-14 11:43:58.304519] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:31.436 [2024-05-14 11:43:58.304550] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:31.436 #46 NEW cov: 10945 ft: 16027 corp: 5/53b lim: 13 exec/s: 46 rss: 73Mb L: 13/13 MS: 1 ChangeBit- 00:07:31.436 [2024-05-14 11:43:58.516132] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:31.436 [2024-05-14 11:43:58.516161] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:31.694 #52 NEW cov: 10945 ft: 16312 corp: 6/66b lim: 13 exec/s: 52 rss: 73Mb L: 13/13 MS: 1 ChangeBit- 00:07:31.694 [2024-05-14 11:43:58.728701] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:31.694 [2024-05-14 11:43:58.728731] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:31.953 #53 NEW cov: 10945 ft: 16607 corp: 7/79b lim: 13 exec/s: 53 rss: 73Mb L: 13/13 MS: 1 ShuffleBytes- 00:07:31.953 [2024-05-14 11:43:58.939723] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:31.953 [2024-05-14 11:43:58.939752] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:32.212 #54 NEW cov: 10945 ft: 16752 corp: 8/92b lim: 13 exec/s: 54 rss: 73Mb L: 13/13 MS: 1 CopyPart- 00:07:32.212 [2024-05-14 11:43:59.148958] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:32.212 [2024-05-14 11:43:59.148987] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:32.212 #60 NEW cov: 10952 ft: 16794 corp: 9/105b lim: 13 
exec/s: 60 rss: 73Mb L: 13/13 MS: 1 ChangeByte- 00:07:32.470 [2024-05-14 11:43:59.355823] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:32.470 [2024-05-14 11:43:59.355853] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:32.470 #61 NEW cov: 10952 ft: 16846 corp: 10/118b lim: 13 exec/s: 30 rss: 73Mb L: 13/13 MS: 1 CopyPart- 00:07:32.470 #61 DONE cov: 10952 ft: 16846 corp: 10/118b lim: 13 exec/s: 30 rss: 73Mb 00:07:32.470 Done 61 runs in 2 second(s) 00:07:32.471 [2024-05-14 11:43:59.505569] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:07:32.471 [2024-05-14 11:43:59.554902] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:32.730 11:43:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:07:32.730 11:43:59 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:32.730 11:43:59 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:32.730 11:43:59 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:32.730 11:43:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:07:32.730 11:43:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:32.730 11:43:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:32.730 11:43:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:32.730 11:43:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:07:32.730 11:43:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:07:32.730 11:43:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:07:32.730 11:43:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:07:32.730 11:43:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:32.730 11:43:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:32.730 11:43:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:32.730 11:43:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:07:32.730 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:32.730 11:43:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:32.730 11:43:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:32.730 11:43:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 
00:07:32.730 [2024-05-14 11:43:59.783427] Starting SPDK v24.05-pre git sha1 b68ae4fb9 / DPDK 24.03.0 initialization... 00:07:32.730 [2024-05-14 11:43:59.783494] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647231 ] 00:07:32.730 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.988 [2024-05-14 11:43:59.855665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.988 [2024-05-14 11:43:59.927481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.246 [2024-05-14 11:44:00.094083] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:33.246 INFO: Running with entropic power schedule (0xFF, 100). 00:07:33.246 INFO: Seed: 2049024232 00:07:33.246 INFO: Loaded 1 modules (348665 inline 8-bit counters): 348665 [0x28cfb4c, 0x2924d45), 00:07:33.246 INFO: Loaded 1 PC tables (348665 PCs): 348665 [0x2924d48,0x2e76cd8), 00:07:33.247 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:33.247 INFO: A corpus is not provided, starting from an empty corpus 00:07:33.247 #2 INITED exec/s: 0 rss: 64Mb 00:07:33.247 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:33.247 This may also happen if the target rejected all inputs we tried so far 00:07:33.247 [2024-05-14 11:44:00.163944] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:07:33.247 [2024-05-14 11:44:00.231618] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:33.247 [2024-05-14 11:44:00.231652] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:33.764 NEW_FUNC[1/648]: 0x484d10 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:07:33.764 NEW_FUNC[2/648]: 0x487250 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:33.764 #49 NEW cov: 10908 ft: 10746 corp: 2/10b lim: 9 exec/s: 0 rss: 71Mb L: 9/9 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:33.764 [2024-05-14 11:44:00.718487] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:33.764 [2024-05-14 11:44:00.718531] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:33.764 #50 NEW cov: 10923 ft: 13616 corp: 3/19b lim: 9 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ShuffleBytes- 00:07:34.023 [2024-05-14 11:44:00.933196] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:34.023 [2024-05-14 11:44:00.933226] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:34.023 NEW_FUNC[1/1]: 0x19cb1d0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:34.023 #51 NEW cov: 10940 ft: 15205 corp: 4/28b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:34.281 [2024-05-14 11:44:01.145724] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:34.281 [2024-05-14 11:44:01.145753] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 
00:07:34.281 #52 NEW cov: 10940 ft: 15444 corp: 5/37b lim: 9 exec/s: 52 rss: 73Mb L: 9/9 MS: 1 ChangeBit- 00:07:34.281 [2024-05-14 11:44:01.360251] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:34.281 [2024-05-14 11:44:01.360279] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:34.540 #53 NEW cov: 10940 ft: 15715 corp: 6/46b lim: 9 exec/s: 53 rss: 73Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:34.540 [2024-05-14 11:44:01.574146] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:34.540 [2024-05-14 11:44:01.574177] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:34.798 #54 NEW cov: 10940 ft: 15919 corp: 7/55b lim: 9 exec/s: 54 rss: 73Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:34.798 [2024-05-14 11:44:01.786300] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:34.798 [2024-05-14 11:44:01.786330] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:35.056 #55 NEW cov: 10940 ft: 16247 corp: 8/64b lim: 9 exec/s: 55 rss: 73Mb L: 9/9 MS: 1 CopyPart- 00:07:35.056 [2024-05-14 11:44:01.998915] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:35.056 [2024-05-14 11:44:01.998945] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:35.056 #56 NEW cov: 10947 ft: 16483 corp: 9/73b lim: 9 exec/s: 56 rss: 73Mb L: 9/9 MS: 1 ChangeBit- 00:07:35.315 [2024-05-14 11:44:02.209418] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:35.315 [2024-05-14 11:44:02.209448] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:35.315 #62 NEW cov: 10947 ft: 16535 corp: 10/82b lim: 9 exec/s: 31 rss: 73Mb L: 9/9 MS: 1 ChangeByte- 00:07:35.315 #62 DONE cov: 10947 ft: 16535 corp: 10/82b lim: 9 exec/s: 31 rss: 73Mb 00:07:35.315 Done 62 runs in 2 second(s) 00:07:35.315 [2024-05-14 11:44:02.355576] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:07:35.574 [2024-05-14 11:44:02.404946] app.c:1026:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:35.574 11:44:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:07:35.574 11:44:02 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:35.574 11:44:02 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:35.574 11:44:02 llvm_fuzz.vfio_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:07:35.574 00:07:35.574 real 0m19.723s 00:07:35.574 user 0m28.347s 00:07:35.574 sys 0m1.728s 00:07:35.574 11:44:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:35.574 11:44:02 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:35.574 ************************************ 00:07:35.574 END TEST vfio_fuzz 00:07:35.574 ************************************ 00:07:35.574 11:44:02 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]] 00:07:35.574 00:07:35.574 real 1m25.325s 00:07:35.574 user 2m9.022s 00:07:35.574 sys 0m10.013s 00:07:35.574 11:44:02 llvm_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:35.574 11:44:02 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:35.574 
************************************ 00:07:35.574 END TEST llvm_fuzz 00:07:35.574 ************************************ 00:07:35.832 11:44:02 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:07:35.832 11:44:02 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:07:35.832 11:44:02 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:07:35.832 11:44:02 -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:35.832 11:44:02 -- common/autotest_common.sh@10 -- # set +x 00:07:35.832 11:44:02 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:07:35.832 11:44:02 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:07:35.832 11:44:02 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:07:35.832 11:44:02 -- common/autotest_common.sh@10 -- # set +x 00:07:42.402 INFO: APP EXITING 00:07:42.402 INFO: killing all VMs 00:07:42.402 INFO: killing vhost app 00:07:42.402 INFO: EXIT DONE 00:07:44.936 Waiting for block devices as requested 00:07:44.936 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:45.194 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:45.194 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:45.194 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:45.194 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:45.453 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:45.453 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:45.453 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:45.711 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:45.711 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:45.711 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:45.970 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:45.970 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:45.970 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:46.229 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:46.229 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:46.229 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:07:49.514 Cleaning 00:07:49.514 Removing: /dev/shm/spdk_tgt_trace.pid3611503 00:07:49.514 Removing: /var/run/dpdk/spdk_pid3609037 00:07:49.514 Removing: /var/run/dpdk/spdk_pid3610302 00:07:49.514 Removing: /var/run/dpdk/spdk_pid3611503 00:07:49.514 Removing: /var/run/dpdk/spdk_pid3612206 00:07:49.514 Removing: /var/run/dpdk/spdk_pid3613043 00:07:49.514 Removing: /var/run/dpdk/spdk_pid3613323 00:07:49.514 Removing: /var/run/dpdk/spdk_pid3614447 00:07:49.514 Removing: /var/run/dpdk/spdk_pid3614478 00:07:49.514 Removing: /var/run/dpdk/spdk_pid3614869 00:07:49.514 Removing: /var/run/dpdk/spdk_pid3615194 00:07:49.514 Removing: /var/run/dpdk/spdk_pid3615513 00:07:49.514 Removing: /var/run/dpdk/spdk_pid3615850 00:07:49.514 Removing: /var/run/dpdk/spdk_pid3616176 00:07:49.514 Removing: /var/run/dpdk/spdk_pid3616469 00:07:49.514 Removing: /var/run/dpdk/spdk_pid3616751 00:07:49.514 Removing: /var/run/dpdk/spdk_pid3617059 00:07:49.514 Removing: /var/run/dpdk/spdk_pid3617921 00:07:49.514 Removing: /var/run/dpdk/spdk_pid3621068 00:07:49.514 Removing: /var/run/dpdk/spdk_pid3621378 00:07:49.514 Removing: /var/run/dpdk/spdk_pid3621674 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3621847 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3622278 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3622517 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3622939 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3623090 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3623391 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3623658 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3623798 00:07:49.773 Removing: 
/var/run/dpdk/spdk_pid3623971 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3624488 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3624660 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3625000 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3625355 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3625662 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3625783 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3626093 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3626503 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3626760 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3627042 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3627321 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3627608 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3627890 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3628179 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3628464 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3628744 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3629018 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3629224 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3629451 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3629671 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3629925 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3630213 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3630498 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3630786 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3631076 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3631356 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3631644 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3631718 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3632097 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3632774 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3633312 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3633645 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3634139 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3634676 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3635148 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3635498 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3636034 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3636567 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3636956 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3637388 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3637931 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3638432 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3638767 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3639286 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3639819 00:07:49.773 Removing: /var/run/dpdk/spdk_pid3640126 00:07:50.031 Removing: /var/run/dpdk/spdk_pid3640640 00:07:50.031 Removing: /var/run/dpdk/spdk_pid3641179 00:07:50.031 Removing: /var/run/dpdk/spdk_pid3641465 00:07:50.031 Removing: /var/run/dpdk/spdk_pid3642002 00:07:50.031 Removing: /var/run/dpdk/spdk_pid3642489 00:07:50.031 Removing: /var/run/dpdk/spdk_pid3642814 00:07:50.031 Removing: /var/run/dpdk/spdk_pid3643352 00:07:50.031 Removing: /var/run/dpdk/spdk_pid3643843 00:07:50.031 Removing: /var/run/dpdk/spdk_pid3644328 00:07:50.031 Removing: /var/run/dpdk/spdk_pid3644783 00:07:50.031 Removing: /var/run/dpdk/spdk_pid3645323 00:07:50.031 Removing: /var/run/dpdk/spdk_pid3645861 00:07:50.031 Removing: /var/run/dpdk/spdk_pid3646398 00:07:50.031 Removing: /var/run/dpdk/spdk_pid3646761 00:07:50.031 Removing: /var/run/dpdk/spdk_pid3647231 00:07:50.031 Clean 00:07:50.031 11:44:17 -- common/autotest_common.sh@1447 -- # return 0 00:07:50.031 11:44:17 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:07:50.031 11:44:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:50.031 11:44:17 -- 
00:07:50.031 11:44:17 -- spdk/autotest.sh@382 -- # timing_exit autotest
00:07:50.031 11:44:17 -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:50.031 11:44:17 -- common/autotest_common.sh@10 -- # set +x
00:07:50.031 11:44:17 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:07:50.031 11:44:17 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]]
00:07:50.031 11:44:17 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log
00:07:50.032 11:44:17 -- spdk/autotest.sh@387 -- # hash lcov
00:07:50.032 11:44:17 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]]
00:07:50.290 11:44:17 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:07:50.290 11:44:17 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:07:50.290 11:44:17 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:50.290 11:44:17 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:50.290 11:44:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:50.290 11:44:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:50.290 11:44:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:50.290 11:44:17 -- paths/export.sh@5 -- $ export PATH
00:07:50.290 11:44:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:50.290 11:44:17 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:07:50.290 11:44:17 -- common/autobuild_common.sh@437 -- $ date +%s
00:07:50.290 11:44:17 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715679857.XXXXXX
00:07:50.290 11:44:17 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715679857.59lblp
00:07:50.290 11:44:17 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:07:50.290 11:44:17 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:07:50.290 11:44:17 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
00:07:50.290 11:44:17 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
00:07:50.290 11:44:17 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:07:50.290 11:44:17 -- common/autobuild_common.sh@453 -- $ get_config_params
00:07:50.290 11:44:17 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:07:50.290 11:44:17 -- common/autotest_common.sh@10 -- $ set +x
00:07:50.290 11:44:17 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:07:50.290 11:44:17 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:07:50.290 11:44:17 -- pm/common@17 -- $ local monitor
00:07:50.290 11:44:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:50.290 11:44:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:50.290 11:44:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:50.290 11:44:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:50.290 11:44:17 -- pm/common@25 -- $ sleep 1
00:07:50.290 11:44:17 -- pm/common@21 -- $ date +%s
00:07:50.290 11:44:17 -- pm/common@21 -- $ date +%s
00:07:50.290 11:44:17 -- pm/common@21 -- $ date +%s
00:07:50.290 11:44:17 -- pm/common@21 -- $ date +%s
00:07:50.290 11:44:17 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715679857
00:07:50.290 11:44:17 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715679857
00:07:50.290 11:44:17 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715679857
00:07:50.290 11:44:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715679857
00:07:50.290 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715679857_collect-vmstat.pm.log
00:07:50.290 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715679857_collect-cpu-load.pm.log
00:07:50.291 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715679857_collect-cpu-temp.pm.log
00:07:50.291 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715679857_collect-bmc-pm.bmc.pm.log
00:07:51.227 11:44:18 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:07:51.227 11:44:18 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:07:51.227 11:44:18 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:07:51.227 11:44:18 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:07:51.227 11:44:18 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:07:51.227 11:44:18 -- spdk/autopackage.sh@19 -- $ timing_finish
00:07:51.227 11:44:18 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:07:51.227 11:44:18 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:07:51.227 11:44:18 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:07:51.227 11:44:18 -- spdk/autopackage.sh@20 -- $ exit 0
00:07:51.227 11:44:18 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:07:51.227 11:44:18 -- pm/common@29 -- $ signal_monitor_resources TERM
00:07:51.227 11:44:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:07:51.227 11:44:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:51.227 11:44:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:07:51.227 11:44:18 -- pm/common@44 -- $ pid=3654433
00:07:51.227 11:44:18 -- pm/common@50 -- $ kill -TERM 3654433
00:07:51.227 11:44:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:51.227 11:44:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:07:51.227 11:44:18 -- pm/common@44 -- $ pid=3654434
00:07:51.227 11:44:18 -- pm/common@50 -- $ kill -TERM 3654434
00:07:51.227 11:44:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:51.227 11:44:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:07:51.227 11:44:18 -- pm/common@44 -- $ pid=3654436
00:07:51.227 11:44:18 -- pm/common@50 -- $ kill -TERM 3654436
00:07:51.227 11:44:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:51.227 11:44:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:07:51.227 11:44:18 -- pm/common@44 -- $ pid=3654495
00:07:51.227 11:44:18 -- pm/common@50 -- $ sudo -E kill -TERM 3654495
00:07:51.227 + [[ -n 3502642 ]]
00:07:51.227 + sudo kill 3502642
00:07:51.495 [Pipeline] }
00:07:51.511 [Pipeline] // stage
00:07:51.516 [Pipeline] }
00:07:51.533 [Pipeline] // timeout
00:07:51.538 [Pipeline] }
00:07:51.554 [Pipeline] // catchError
00:07:51.559 [Pipeline] }
00:07:51.576 [Pipeline] // wrap
00:07:51.581 [Pipeline] }
00:07:51.598 [Pipeline] // catchError
00:07:51.607 [Pipeline] stage
00:07:51.608 [Pipeline] { (Epilogue)
00:07:51.621 [Pipeline] catchError
00:07:51.623 [Pipeline] {
00:07:51.636 [Pipeline] echo
00:07:51.637 Cleanup processes
00:07:51.641 [Pipeline] sh
00:07:52.021 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:07:52.021 3564410 sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715679515
00:07:52.021 3564453 bash /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715679515
00:07:52.021 3654675 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache
00:07:52.021 3655497 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:07:52.034 [Pipeline] sh
00:07:52.316 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:07:52.316 ++ grep -v 'sudo pgrep'
00:07:52.316 ++ awk '{print $1}'
00:07:52.316 + sudo kill -9 3564410 3564453
00:07:52.327 [Pipeline] sh
00:07:52.608 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:07:52.608 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:07:52.608 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:07:53.991 [Pipeline] sh
00:07:54.272 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:07:54.272 Artifacts sizes are good
00:07:54.286 [Pipeline] archiveArtifacts
00:07:54.292 Archiving artifacts
00:07:54.344 [Pipeline] sh
00:07:54.627 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest
00:07:54.640 [Pipeline] cleanWs
00:07:54.649 [WS-CLEANUP] Deleting project workspace...
00:07:54.649 [WS-CLEANUP] Deferred wipeout is used...
00:07:54.655 [WS-CLEANUP] done
00:07:54.657 [Pipeline] }
00:07:54.676 [Pipeline] // catchError
00:07:54.687 [Pipeline] sh
00:07:54.968 + logger -p user.info -t JENKINS-CI
00:07:54.976 [Pipeline] }
00:07:54.992 [Pipeline] // stage
00:07:54.997 [Pipeline] }
00:07:55.013 [Pipeline] // node
00:07:55.018 [Pipeline] End of Pipeline
00:07:55.052 Finished: SUCCESS